
RAG WARS - Advancing AI

Hosted By
Roger R.

Details

Join us (Suleman Kazi, Nikhil Bysani, Ofer Mendelevitch & Rogger Luo) as we explore advanced techniques to enhance the utility and reliability of Large Language Models (LLMs) across diverse applications, including structured outputs and external function integration, robust enterprise data architecture, and strategies for reducing hallucinations.
The talks cover a spectrum of methods to optimize both the performance and accuracy of LLM and RAG-based systems in real-world settings.

To be admitted in-person, you must register on our Luma event page using the following link: https://lu.ma/ragwars

(If you can't attend LIVE, you can sign up for the LIVE STREAM and REPLAY here: https://crowdcast.io/c/ragwars)

Vectara, The Trusted GenAI Platform 🚀 is proud to host "RAG WARS - Advancing AI: Enhancing LLMs and RAG for Improved Performance ⛏️ & Reliability" 🛠
📣 Join fellow industry peers for an evening of thought leadership talks 💡 and social networking 🥂 as we do a deep dive into the hottest topic in LLMs and Retrieval Augmented Generation (RAG) 🔥

Talks are listed in order of presentation:
[1] Structured Output and Function Calling for Large Language Models
Suleman Kazi, ML @ Vectara
Ever wanted your LLM to produce output in a particular format (JSON, CSV, XML, …) so you can easily parse it or use it in a downstream task? How about giving it access to external functions that perform a task or return information the LLM doesn't have? In this talk, you'll learn how to do both of these tasks, known as structured output and function calling, respectively. We'll talk about why they are useful and how you can enable them with open-source LLMs on Hugging Face.
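As a rough illustration of the pattern this talk covers, here is a minimal Python sketch. The model call is stubbed out, and the tool name (`get_weather`), the registry, and the JSON message shape are all illustrative assumptions — not any particular library's API:

```python
import json

# Hypothetical tool the model can call; in a real system the LLM would be
# prompted with a schema describing this function and its parameters.
def get_weather(city: str) -> dict:
    # Stubbed data source; a real tool would query an external API.
    return {"city": city, "temp_c": 18}

TOOLS = {"get_weather": get_weather}

def dispatch(llm_output: str):
    """Parse the model's structured (JSON) output and, if it names a
    registered function, call it with the supplied arguments."""
    msg = json.loads(llm_output)
    if msg.get("type") == "function_call":
        fn = TOOLS[msg["name"]]
        return fn(**msg["arguments"])
    return msg.get("content")

# Stand-in for what an LLM might emit after schema-constrained prompting:
llm_output = (
    '{"type": "function_call", "name": "get_weather",'
    ' "arguments": {"city": "Palo Alto"}}'
)
result = dispatch(llm_output)  # → {"city": "Palo Alto", "temp_c": 18}
```

The point of the structured-output half is that because the model emits valid JSON, the `json.loads` + dispatch step is trivial; the talk covers how to actually constrain open-source models to produce it.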

[2] Enterprise data architecture in machine learning and RAG systems
Nikhil Bysani, Engineering @ Vectara

  • Best practices for storing and consuming data in ML systems: data lakes/warehouses, S3, and event-driven architectures
  • The data lifecycle and best ingestion practices with Vectara
  • Managing state and synchronizing data between Vectara and other data systems
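The synchronization point above can be sketched as an event-driven mirror: replay change events from the source-of-truth store into the vector index. The `VectorIndexClient` below is a hypothetical stand-in, not the actual Vectara SDK:

```python
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    op: str        # "upsert" or "delete"
    doc_id: str
    text: str = ""

class VectorIndexClient:
    """Toy in-memory stand-in for a vector index (e.g. as exposed by a
    real indexing API); tracks documents by id."""
    def __init__(self):
        self.docs = {}
    def upsert(self, doc_id: str, text: str):
        self.docs[doc_id] = text
    def delete(self, doc_id: str):
        self.docs.pop(doc_id, None)

def sync(events, index):
    """Replay change events in order so the index mirrors the source
    system; idempotent upserts make replays safe."""
    for ev in events:
        if ev.op == "upsert":
            index.upsert(ev.doc_id, ev.text)
        elif ev.op == "delete":
            index.delete(ev.doc_id)

events = [
    ChangeEvent("upsert", "a", "doc A v1"),
    ChangeEvent("upsert", "a", "doc A v2"),  # later version wins
    ChangeEvent("delete", "b"),
]
index = VectorIndexClient()
sync(events, index)  # index.docs == {"a": "doc A v2"}
```

The design choice the talk examines — keeping one system as the source of truth and treating the index as a derived, replayable view — is what makes state management tractable.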

[3] Strategies for Mitigating Hallucination in Large Language Models
Rogger Luo, ML @ Vectara
Hallucination poses a significant challenge to the usability and reliability of LLM applications. In this presentation, we offer an insightful overview of contemporary methods aimed at mitigating hallucination in summarization, drawing from our own practical experiences with these techniques. Our examination reveals that these methods can be broadly categorized into three main approaches: Alignment with Fine-tuning (DPO), Control at Inference (DoLA), and Post-Editing (FAVA).
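As a rough illustration of the post-editing approach, the toy sketch below flags summary sentences containing content words absent from the source and drops them. This is a deliberate simplification: the actual FAVA pipeline uses a trained detector and editor model, and the lexical check here is purely illustrative:

```python
def unsupported_sentences(source: str, summary: str) -> list:
    """Toy detector: a summary sentence is 'unsupported' if it contains
    any word that never appears in the source text."""
    src_words = {w.strip(".,").lower() for w in source.split()}
    flagged = []
    for sent in summary.split(". "):
        words = {w.strip(".,").lower() for w in sent.split()}
        if words - src_words:
            flagged.append(sent)
    return flagged

def post_edit(source: str, summary: str) -> str:
    """Drop flagged sentences; a real post-editor would rewrite them
    to stay faithful to the source instead of deleting."""
    flagged = set(unsupported_sentences(source, summary))
    kept = [s for s in summary.split(". ") if s not in flagged]
    return ". ".join(kept)

source = "The meetup covers RAG and runs on June 19 in Palo Alto."
summary = "The meetup covers RAG. It takes place on the moon."
post_edit(source, summary)  # → "The meetup covers RAG"
```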
The conversation will be moderated by Ofer Mendelevitch, Head of Developer Relations at Vectara.

This event is open to everyone, so save the date and meet us at 6 pm PT on June 19th. Let's explore the cutting edge of RAG together while networking and enjoying food and drinks! 🚀
Get a head start and sign up for Vectara before the event!

San Francisco Startups and Investors
Procopio, Cory, Hargreaves & Savitch LLP
5 Palo Alto Square Suite 400 · Palo Alto, CA