

What we’re about
We're a community of builders focused on the practical engineering challenges of deploying AI in production. Whether you're working with LLMs, embeddings, RAG systems, or inference optimization, this meetup is about sharing real-world experiences building AI-powered applications.
We explore the entire AI engineering stack: from lightning-fast inference with Groq and edge deployments on Cloudflare Workers, to vector search with Pinecone and Supabase (pgvector), to scalable data layers built on PlanetScale and ClickHouse. We're vendor-agnostic and technology-curious, embracing everything from open-source models to API-based solutions.
Our sessions cover:
- Building production RAG systems and semantic search
- Optimizing inference performance and cost
- Edge AI and distributed inference architectures
- Vector databases and hybrid search strategies
- Prompt engineering and fine-tuning workflows
- Observability and monitoring for AI applications
- Multi-modal applications and agent architectures
- Real-world case studies and architecture deep-dives
Whether you're deploying your first LLM application or architecting enterprise AI systems, join us to share knowledge, tackle engineering challenges, and build the next generation of AI-powered products.