
AI Meetup (in-person): Generative AI and LLMs - Halloween Edition

Hosted By
April

Details

Pre-registration is required, complete your RSVP here: https://www.aicamp.ai/event/eventdetails/W2023103014

Welcome to the Halloween Edition of our monthly in-person AI meetup in Washington DC, held in collaboration with TruEra and Zilliz. Join us for spooky tech talks on AI/ML, food and drinks, and networking with the speakers and fellow developers.

Agenda:

  • 5:30pm~6:00pm: Checkin, Food and Networking
  • 6:00pm~6:10pm: Welcome/community update
  • 6:10pm~8:00pm: Tech talks
  • 8:00pm: Q&A and Open discussion
  • 8:30pm: Head to the bar

Tech Talk 1: Detecting and Debugging AI Agent Drift
Speaker: Josh Reini, DevRel Engineer @TruEra
Abstract: In an LLM-powered autonomous agent system, LLMs function as the agent’s brain, using planning and inference to decide which tools to use and how. These tools can act as real-time data sources or perform real-world actions such as ordering food delivery, booking flights, or scheduling doctor’s appointments. However, LLM agents, like other ML systems, are subject to drift. This talk will cover the causes of LLM drift and how to identify and debug them with the open-source TruLens.
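To make the idea concrete, here is a minimal, hypothetical sketch of one way drift can be flagged: track a per-response quality metric (such as a groundedness score in [0, 1]) and compare a recent window of scores against a baseline window. This is only an illustration of the concept; TruLens itself provides far richer feedback functions and tracing, and the metric and threshold below are assumptions.

```python
# Hypothetical drift check: flag when the mean of recent quality scores
# departs from the baseline mean by more than a few baseline standard
# deviations. Not the TruLens API -- an illustrative sketch only.
from statistics import mean, stdev

def drift_score(baseline, recent):
    """Absolute difference of window means, measured in baseline std devs."""
    sd = stdev(baseline) or 1.0  # guard against a zero-variance baseline
    return abs(mean(recent) - mean(baseline)) / sd

# Example: groundedness scores per agent response (values are made up)
baseline_scores = [0.91, 0.88, 0.93, 0.90, 0.89]
recent_scores = [0.72, 0.69, 0.75, 0.71, 0.70]

if drift_score(baseline_scores, recent_scores) > 3.0:
    print("possible drift: inspect agent traces")
```

In practice the interesting work is in choosing the metric (groundedness, answer relevance, tool-selection accuracy) and attributing a drift signal to a cause, which is what the talk covers.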

Tech Talk 2: Caching for ChatGPT and More with Vector Databases
Speaker: Yujian Tang, Developer Advocate @Zilliz
Abstract: A few months ago, AI was “hip” and “cool,” but it wasn’t mainstream. Then ChatGPT single-handedly put AI, and large language models (LLMs) in particular, on everyone’s radar. Since then, people have built all sorts of applications using GPT and its extensions, including a bot that automatically orders pizza.
Despite all their potential, LLMs still have some limitations. In this talk, we will cover how to overcome some of these limitations by using vector databases to inject domain-specific knowledge. We will also share some open-source tools that cache LLM responses, helping you decrease the cost and increase the performance of your LLM app.
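The caching idea can be sketched in a few lines: embed each prompt, and if a previously seen prompt is similar enough, return the cached answer instead of paying for another LLM call. Real tools (and the vector databases behind them) use actual embedding models and approximate nearest-neighbor search; the toy embedding, threshold, and class names below are assumptions for illustration only.

```python
# Hypothetical semantic cache: match new prompts against cached ones by
# cosine similarity of their embeddings. The "embedding" here is a toy
# bag-of-characters vector standing in for a real embedding model.
import math

def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, prompt):
        emb = embed(prompt)
        for cached_emb, response in self.entries:
            if cosine(emb, cached_emb) >= self.threshold:
                return response  # cache hit: no LLM call needed
        return None  # cache miss: call the model, then put()

    def put(self, prompt, response):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("What is a vector database?", "A database for similarity search.")
print(cache.get("what is a vector database"))  # near-identical prompt hits
```

A vector database replaces the linear scan in `get` with indexed similarity search, which is what makes this approach work at production scale.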

Tech Talk 3: LLMs for Continuous Research
Speaker: Nisha Iyer, Advisor @ CoNote
Abstract: CI/CD embraces the power of continuous work, agile development, and the ability to iterate and move fast. The missing piece is often research, which runs alongside the CI/CD process. Traditional UX research cycles take time and effort, as they should. CoNote is an AI product that scales and supercharges your research. Today, I will talk through what makes our AI engine unique and how we use LLMs in production. I will also cover some of the exciting AI features to come and some of the ideation we are doing as a team to make our engine even more powerful.

Stay tuned as we update the speakers and schedule.
If you have a keen interest in speaking to our community, we invite you to submit topics for consideration: Submit Topics

Sponsors:
We are actively seeking sponsors to support our AI developer community. Sponsors receive speaking opportunities, sponsor recognition, and post-event emails to our membership base of 3K+ members in DC and 250K+ developers globally. Contact us for details.

Community Partners:
Contact us if you are interested in partnership.

Community on Slack
- Event chat: chat and connect with speakers and attendees
- Share blogs, events, job openings, and project collaborations
Join Slack (search and join #washingtondc channel)

DC AI Developers Group
Martin Luther King Jr. Memorial Library
901 G St NW · Washington, DC