
RAG, LLMs & RAGAS: Making AI Answers Make Sense - 4th Houston Meet

Hosted By
TheTestTribe

Details

GenAI is powerful. But how do you know it’s right?
As developers and testers work with Retrieval-Augmented Generation (RAG) systems and LLM-powered applications, one big question keeps popping up: how do we evaluate the quality of these AI responses, at scale and with context?

At our 4th Houston Meetup, we're diving deep into RAGAS, an open-source framework that is reshaping how teams assess and improve their AI applications with clarity, structure, and precision.

Date: July 17, 2025
Time: 6:00 PM – 8:00 PM CDT
Venue: Improving Office
Maps Link: https://maps.app.goo.gl/iNMML6Rg1kMm8F6m7

Talk Title: RAG and LLM Evaluation with RAGAS
Retrieval-Augmented Generation (RAG) is driving the next generation of GenAI systems—but evaluating them remains a complex challenge. In this talk, Prabhat Singh and Alessandro De La Garza from Slalom Consulting will introduce RAGAS, an open-source evaluation framework that helps teams automate the assessment of RAG systems and LLM-generated content.
You’ll discover how RAGAS goes beyond traditional NLP metrics to measure accuracy, relevance, and completeness, offering a structured workflow to improve the quality and reliability of GenAI applications.
Whether you're building RAG-based pipelines or testing LLM outputs, this session will equip you with tools and insights to make evaluation measurable—and meaningful.
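To build intuition for what a RAG evaluation metric measures before the talk: RAGAS itself scores answers with LLM-based judgments, but the core idea behind a metric like faithfulness, checking how well an answer is grounded in the retrieved context, can be sketched with a toy lexical proxy. The function below is purely illustrative and is not the RAGAS API; the name `faithfulness_score` and the token-overlap approach are our own simplification.

```python
import re


def faithfulness_score(answer: str, contexts: list[str]) -> float:
    """Toy grounding proxy: fraction of answer tokens found in the
    retrieved contexts. Real RAGAS faithfulness uses an LLM judge to
    verify each claim in the answer against the context instead."""
    answer_tokens = set(re.findall(r"\w+", answer.lower()))
    context_tokens: set[str] = set()
    for context in contexts:
        context_tokens |= set(re.findall(r"\w+", context.lower()))
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)


# Fully grounded answer: every answer token appears in the context.
print(faithfulness_score(
    "Paris is the capital",
    ["The capital of France is Paris."],
))  # 1.0

# Partially ungrounded answer: "in" and "germany" are unsupported.
print(faithfulness_score(
    "Paris is in Germany",
    ["The capital of France is Paris."],
))  # 0.5
```

A lexical overlap like this misses paraphrase and hallucinated claims built from context words, which is exactly why frameworks like RAGAS use LLM-based evaluation instead; the sketch only shows the shape of the computation.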

Key Takeaways:

  • Understand what RAGAS is and how it works
  • Learn how to evaluate GenAI responses with actionable metrics
  • Discover techniques for automating LLM and RAG evaluations
  • Gain real-world insights into AI quality engineering workflows

Speakers:
Prabhat Singh – Architect, Quality Engineering, Slalom Consulting
With over 20 years of experience across domains, Prabhat is passionate about building future-ready QA strategies. He’s currently exploring LLMs, RAG, and GenAI testing—and helping the QE community do the same.
Alessandro De La Garza – Engineer, Quality Engineering, Slalom Consulting
Alessandro has worked in industries ranging from energy to telecom, with a strong focus on data-rich projects. He’s now on a mission to demystify AI-accelerated engineering for fellow testers and developers.

Why Attend?

  • You want to evaluate GenAI systems like a pro
  • You work with LLMs, RAG pipelines, or AI-powered applications
  • You’re exploring AI testing tools and frameworks
  • You want to connect with others on the same path

About The Test Tribe
The Test Tribe is the world’s largest software testing community, empowering testers across 130+ countries through 400+ events, expert courses, membership programs, and meetups.
By RSVPing to this event, you agree to our Terms and Conditions and Privacy Policy, and consent to be contacted by The Test Tribe and our collaborators.

Test smarter. Evaluate deeper. Join us in Houston!

The Test Tribe Houston
Improving
10111 Richmond Ave #100 · Houston, TX