Elastic x Apache Kafka®

Hosted By
Alice R.

Details

IMPORTANT: RSVP WILL BE CLOSED ON THIS PAGE. PLEASE RSVP @ https://www.meetup.com/elastic-toronto-user-group/events/306015499/?slug=elastic-toronto-user-group&eventId=301883000&isFirstPublish=true

Agenda:
5:30 - Doors open; come in, meet some people, grab some food
5:45 - Talk 1, presented by Adam Kasztenny, Senior Software Engineer at Elastic
6:15 - Q&A with Adam
6:30 - Break
6:45 - Talk 2, presented by Edward Vaisman, Innovation Engineer at Confluent
7:15 - Q&A with Edward
7:30 - Event wrap up

Talk 1: How would you build Elasticsearch if it was started in 2024?
Decouple compute and storage, outsource persistence to a blob store like S3, scale up and down dynamically, ship the right defaults, and give developers a clear path. This is what we have done!
In this talk, learn how we have redesigned Elasticsearch to do more with a stateless architecture that can run hot queries on cold storage. And see how you can get started with it today.
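
To give a rough sense of what the stateless model looks like from a client's point of view, here is a minimal sketch in Python using the official Elasticsearch client. The endpoint URL, API key, and index name are placeholders, and the talk itself covers the server-side architecture rather than this client code.

# Minimal sketch (not from the talk): talking to a serverless Elasticsearch
# project with the official Python client. Endpoint, API key, and index name
# below are placeholders.
from elasticsearch import Elasticsearch

# In the stateless model you connect to a project endpoint; persistence and
# scaling are handled behind the scenes on top of a blob store such as S3.
es = Elasticsearch(
    "https://my-project.es.example.cloud:443",  # placeholder endpoint
    api_key="YOUR_API_KEY",                     # placeholder credential
)

# Index a document; the data ultimately lands in object storage.
es.index(index="talks", document={"title": "Stateless Elasticsearch", "year": 2024})

# Run a query; "hot queries on cold storage" means this still works even
# though the underlying data lives in the blob store.
resp = es.search(index="talks", query={"match": {"title": "stateless"}})
print(resp["hits"]["total"])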

Talk 2: Context in Motion: Empowering LLMs with Near Real-Time Data and Tools
Large Language Models (LLMs) are great at a wide range of natural language tasks, and techniques like Retrieval-Augmented Generation (RAG) have made them even better by giving them access to external knowledge. But if we want LLMs to be truly dynamic and intelligent in real time, they need more than information retrieval; they need the ability to interact with their environment and adapt to constantly changing conditions. This talk introduces the Model Context Protocol (MCP), an open protocol designed to work alongside existing approaches like RAG and give LLMs near real-time data streams and contextual tools.

Think about it this way: even someone with tons of skills needs the right tools to get the job done. A carpenter with the knowledge and expertise to construct fancy banisters would be totally stuck without their hammer, saw, and measuring tape. Or think about someone who needs reading glasses: they can understand what they should be seeing, but glasses are what lets them actually see the details clearly. It's the same with LLMs. They're trained on so much information and can solve all sorts of problems, but just like that carpenter or someone needing glasses, a regular LLM often can't actually use that knowledge to solve your specific problem right now. That's where MCP comes in. It's not about replacing what the LLM already knows; it's about giving it the extra tools and senses it needs to connect all that awesome knowledge to what's happening in the real world, in near real-time.

We’ll show how awesome and useful the Model Context Protocol is! We'll do a demo using a real-time e-commerce example. Envision a retail store: we'll create streams of data about shoes, customer orders, clicks, and customer profiles. We'll chat with an LLM that uses MCP to connect to these data sources, then pull data from these streams to show how the LLM can work with real-time info. The cool part is that MCP works with data governance tools to automatically find things like customer addresses in order data. This triggers a policy action, showing how MCP can change data retention policies (like setting a one-day limit for PII data) to follow privacy rules. This demo will showcase how MCP, along with existing techniques like RAG, can build strong, real-time, and policy-compliant data processing pipelines in e-commerce and other areas.
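
If you're curious what an MCP integration looks like in code before the talk, here is a minimal, hypothetical sketch in Python using the MCP SDK's FastMCP helper. The tool name, the stubbed order data, and the idea of backing it with a Kafka topic are illustrative assumptions, not the presenters' actual demo code.

# Minimal sketch (not the speakers' demo): an MCP server exposing a
# hypothetical "recent_orders" tool. A real version would read from Kafka
# topics; here the stream is stubbed so the sketch stays self-contained.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ecommerce-demo")

# Stand-in for a near real-time order stream (e.g. a Kafka topic).
_ORDERS = [
    {"order_id": "o-1001", "sku": "shoe-42", "quantity": 1},
    {"order_id": "o-1002", "sku": "shoe-7", "quantity": 2},
]

@mcp.tool()
def recent_orders(limit: int = 10) -> list[dict]:
    """Return the most recent customer orders for the LLM to reason over."""
    return _ORDERS[-limit:]

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable LLM client can call it.
    mcp.run()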

Toronto Apache Kafka® Meetup by Confluent
Centre for Social Innovation
192 Spadina Ave · Toronto, ON