RAG/Local LLM
Details
Join us for an exciting deep dive into the world of RAG (Retrieval-Augmented Generation) and Local LLMs (Large Language Models)! Whether you're new to the field or an experienced practitioner, this meetup is designed to explore the cutting-edge tools and strategies for deploying and leveraging local LLMs for retrieval-augmented workflows.
What We'll Cover
- Tech Stack Overview:
  - Ollama: Streamline local LLM management and inference.
  - OpenWebUI: Simplify and enhance the local LLM experience with a flexible UI.
  - LangFlow: Design, debug, and visualize LLM pipelines with ease.
  - Qdrant: High-performance vector search for efficient document retrieval.
- How RAG Works:
  We'll break down the Retrieval-Augmented Generation process: combining vector-based retrieval with LLMs to generate context-aware, accurate, and efficient responses.
- Practical Applications:
  Learn how to use this stack for real-world use cases like custom chatbots, knowledge management, summarization, and more, all while keeping your data private and under your control.
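To make the retrieve-then-generate loop concrete, here is a minimal toy sketch of the RAG flow in Python. It uses a bag-of-words embedding and an in-memory document list purely for illustration; in the stack above, the embedding and search would be handled by an embedding model served through Ollama and by Qdrant, and the final prompt would be sent to a local LLM. All function names here (`embed`, `retrieve`, `build_prompt`) are illustrative, not part of any of these tools' APIs.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration only.
    # A real pipeline would call an embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Retrieval step: rank documents by similarity to the query
    # (what a vector database like Qdrant does at scale).
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, context_docs):
    # Augmentation step: stuff retrieved context into the prompt
    # before handing it to the local LLM for generation.
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Qdrant is a vector database for similarity search.",
    "Ollama runs large language models locally.",
]
query = "Which tool runs LLMs locally?"
top = retrieve(query, docs)
prompt = build_prompt(query, top)
```

The generation step itself (sending `prompt` to a model) is omitted, since it depends on which local LLM you serve; the key idea is that retrieval narrows the model's context to your own documents, which is what keeps answers grounded and your data local.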
