AI Deep Dive: LLMs and Vector Databases


Details
Welcome to our in-person AI meetup, in collaboration with Google Developers Group. Join us for deep-dive tech talks on AI/ML, food and drinks, networking with speakers and fellow developers, and a chance to win lucky draw prizes.
Pre-registration is required: https://www.aicamp.ai/event/eventdetails/W2023090610
[RSVP instructions]
- Pre-register at the event website (venue security may not let you in if you don't pre-register).
- Contact us to submit talk topics and/or to sponsor the meetup (venue, food, swag, prizes): https://forms.gle/JkMt91CZRtoJBSFUA
- Join our community on Slack for event chat, speaker office hours, learning resources, job openings, and project collaboration: join Slack and search for the #london channel.
Description:
This meetup is a unique opportunity to connect with fellow AI enthusiasts, industry practitioners, and researchers in a dynamic and interactive setting. Whether you are a seasoned AI professional or simply curious about the latest advancements in Generative AI, LLMs, and Vector Databases, this meetup is for you! Join us for an insightful and thought-provoking discussion at the forefront of AI innovation and practice.
Agenda (BST):
* 6:00pm~6:30pm: Check-in, food/snacks/drinks, and networking
* 6:30pm~6:45pm: Welcome/community update
* 6:45pm~7:45pm: Tech talks
* 7:45pm: Open discussion & Mixer
Tech Talk 1: Using Vector Databases with Multimodal Embeddings and Search at Scale
Speaker: Zain Hasan @Weaviate
Abstract: Many real-world problems are inherently multimodal, from the communicative modalities humans use, such as spoken language and gestures, to the force, proprioception, and visual sensors ubiquitous in robotics. For machine learning models to address these problems, interact more naturally and holistically with the world around them, and ultimately become more general and powerful reasoning engines, they need to understand data across all of its corresponding image, video, text, audio, and tactile representations.
In this talk we will discuss how open-source multimodal models that can see, hear, read, and feel data(!) can be used to perform cross-modal search (searching audio with images, videos with text, etc.) at the billion-object scale with the help of open-source vector databases. I will also demonstrate, with live code demos and large-scale datasets, how performing this cross-modal retrieval in real time can help users add natural search interfaces to their apps. This talk will cover how we scaled the usage of multimodal embedding models in production and how you can add cross-modal search to your apps.
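As a rough, library-agnostic illustration of the cross-modal retrieval pattern described above (not the speaker's actual code), the sketch below embeds both images and a text query into a shared vector space and ranks the images by cosine similarity. The embed_image and embed_text functions are hypothetical stand-ins for a multimodal encoder (e.g. a CLIP-style model), and the brute-force scan is what a vector database would replace at billion-object scale.

```python
# Minimal sketch of cross-modal (text -> image) search in a shared embedding space.
# embed_image / embed_text are hypothetical placeholders for a multimodal encoder;
# a vector database's approximate nearest-neighbour index replaces the brute-force scan.
import numpy as np

def embed_image(path: str) -> np.ndarray:
    """Placeholder: return a unit-norm embedding for an image."""
    raise NotImplementedError("plug in a multimodal encoder here")

def embed_text(query: str) -> np.ndarray:
    """Placeholder: return a unit-norm embedding for a text query (same vector space)."""
    raise NotImplementedError("plug in the same encoder's text tower here")

def cross_modal_search(query: str, image_paths: list[str], k: int = 5) -> list[tuple[str, float]]:
    """Rank images by cosine similarity to a text query."""
    q = embed_text(query)
    vectors = np.stack([embed_image(p) for p in image_paths])  # shape (N, d), unit-normalized
    scores = vectors @ q                                        # dot product == cosine similarity here
    top = np.argsort(-scores)[:k]
    return [(image_paths[i], float(scores[i])) for i in top]
```

Because all modalities share one vector space, the same pattern works in any direction (text to image, image to audio, and so on); only the encoder changes.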
Tech Talk 2: Bringing LLMs to Your Data
Speaker: JP Huang @Weaviate
Abstract: In this talk, JP explains how Weaviate redefines what you thought was possible in a database. JP will begin by showing how you can use Weaviate to search data effectively, then show how generative search (retrieval augmented generation) with Weaviate can transform your data at retrieval time with LLMs.
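As a rough sketch of the generative search / retrieval augmented generation pattern the abstract refers to (not Weaviate's actual client API), the example below retrieves the most relevant chunks for a question from a vector index and then asks an LLM to answer using only that context. embed, vector_search, and llm_complete are hypothetical placeholders for whatever embedding model, vector database, and LLM you actually use.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant chunks
# from a vector store, then ask an LLM to answer grounded in that context.
# embed(), vector_search(), and llm_complete() are hypothetical placeholders,
# not any specific database's or model provider's API.
from typing import List

def embed(text: str) -> List[float]:
    raise NotImplementedError("plug in your embedding model here")

def vector_search(query_vector: List[float], k: int) -> List[str]:
    raise NotImplementedError("plug in your vector database's nearest-neighbour search here")

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")

def generative_search(question: str, k: int = 3) -> str:
    """Answer a question using the k most relevant retrieved chunks as context."""
    chunks = vector_search(embed(question), k)   # retrieval step
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)                  # generation step
```

The key idea is that the data is transformed at retrieval time: the database returns fresh, relevant context on every query, and the LLM generates over that context rather than over its training data alone.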
