
Details

*** IMPORTANT ***

To be admitted in person, you must register on our Luma event page using the following link:

https://lu.ma/ragwars

If you can't attend LIVE, you can sign up for the LIVE STREAM and REPLAY here:

https://crowdcast.io/c/ragwars

Vectara, The Trusted GenAI Platform 🚀 is proud to host "RAG WARS - Advancing AI: Enhancing LLMs and RAG for Improved Performance ⛏️ & Reliability" 🛠️

📣 Join fellow industry peers for an evening of thought-leadership talks 💡 and social networking 🥂 as we take a deep dive into the hottest topics in LLMs and Retrieval Augmented Generation (RAG) 🔥

*** Overview ***

This meetup explores advanced techniques to enhance the utility and reliability of Large Language Models (LLMs) across diverse applications. From structured outputs and external function integration to robust enterprise data architecture and strategies for reducing hallucinations, the talks cover a spectrum of methods to optimize both the performance and accuracy of LLM and RAG-based systems in real-world settings.

*** AGENDA ***

  • 6:00 PM - 7:00 PM - Networking and Food Served
    Enjoy some bites while networking with peer thought leaders in the LLM and GenAI space.
  • 7:00 PM - 9:00 PM - Sessions

Talks are listed in order of presentation:

[1] Structured Output and Function Calling for Large Language Models

Suleman Kazi, ML @ Vectara

Ever wanted your LLM to produce output in a particular format (JSON, CSV, XML…) so you can easily parse it or use it in a downstream task? How about giving it access to external functions that perform a task or return information the LLM does not have on its own? In this talk, you'll learn how to do both, known as structured output and function calling, respectively. We'll talk about why they are useful and how you can enable them with open-source LLMs on Hugging Face.
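As a rough illustration of the function-calling pattern the abstract describes (not taken from the talk itself): the model is constrained to emit JSON naming a function and its arguments, and the application parses that output and dispatches the call. The model response is mocked below, and `get_weather`/`dispatch` are hypothetical names chosen for the sketch.

```python
import json

# Hypothetical tool the LLM may call; name and signature are illustrative.
def get_weather(city: str) -> str:
    # In a real system this would query a weather API.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse the model's structured (JSON) output and call the named function."""
    call = json.loads(model_output)
    fn = TOOLS[call["function"]]
    return fn(**call["arguments"])

# Stand-in for a real LLM response constrained to emit JSON,
# e.g. via constrained decoding with an open-source model.
mock_llm_output = '{"function": "get_weather", "arguments": {"city": "Palo Alto"}}'
print(dispatch(mock_llm_output))  # Sunny in Palo Alto
```

The key design point is that the LLM never executes anything itself; it only names a function, and the application stays in control of what actually runs.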

[2] Enterprise data architecture in machine learning and RAG systems

Nikhil Bysani, Engineering @ Vectara

  • Best practices for storing and consuming data in ML systems, such as data lakes/warehouses, S3, and event-driven systems
  • The data lifecycle and best ingestion practices with Vectara
  • Managing state and synchronizing data between Vectara and other data systems

[3] Strategies for Mitigating Hallucination in Large Language Models

Rogger Luo, ML @ Vectara

Hallucination poses a significant challenge to the usability and reliability of LLM applications. In this presentation, we offer an overview of contemporary methods for mitigating hallucination in summarization, drawing on our own practical experience with these techniques. These methods can be broadly categorized into three main approaches: Alignment with Fine-tuning (DPO), Control at Inference (DoLa), and Post-Editing (FAVA).

The conversation will be moderated by Ofer Mendelevitch, Head of Developer Relations at Vectara.

This event is open to everyone, so save the date and meet us at 6 PM PT on June 19th. Let's explore the cutting edge of RAG together while networking and enjoying food and drinks! 🚀

Get a head start and sign up for Vectara before the event!

***

