
Details

In this series, we dive deep into our most popular, fully-featured, and open-source RAG solution: https://aka.ms/ragchat

How can you be sure that the RAG chat app answers are accurate, clear, and well formatted? Evaluation!
In this session, we'll show you how to generate synthetic data and run bulk evaluations on your RAG app using the azure-ai-evaluation SDK. Learn about GPT metrics like groundedness and fluency, as well as custom metrics like citation matching. Plus, discover how you can run evaluations in CI/CD to easily verify that new changes don't introduce quality regressions.
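For a sense of what bulk evaluation looks like, here is a minimal sketch using the azure-ai-evaluation SDK. The data file name, column layout, and model deployment below are placeholders for illustration, not the session's actual code:

# Minimal sketch of a bulk evaluation run with azure-ai-evaluation.
# The endpoint, key, deployment, and JSONL file below are assumed placeholders.
from azure.ai.evaluation import evaluate, GroundednessEvaluator, FluencyEvaluator

# Configuration for the GPT model that scores the metrics (placeholder values).
model_config = {
    "azure_endpoint": "https://<your-openai-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "gpt-4o",
}

# Score every row of a JSONL file (assumed to have query/context/response columns)
# with the built-in GPT metrics.
results = evaluate(
    data="rag_eval_data.jsonl",
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "fluency": FluencyEvaluator(model_config),
    },
)
print(results["metrics"])

The same script can be invoked from a CI/CD workflow so that each pull request reports its evaluation scores before merging.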

This session is part of a series.

Sponsors

Microsoft Reactor YouTube: Watch past Microsoft Reactor events on demand anytime

Microsoft Learn AI Hub: Learning hub for all things AI

Microsoft Copilot Hub: Learning hub for all things Copilot

Microsoft Reactor LinkedIn: Follow Microsoft Reactor on LinkedIn
