What we’re about
We are a group of startup engineers, research scientists, computational linguists, mathematicians, philosophers, and others interested in understanding the meaning of text, reasoning, and human intent through technology. We want to apply our understanding to building new businesses and improving overall human experience in the modern connected world. The MIND Stack explained: mind.wtf.
This is a technical AI meetup: we build systems with machine learning on top of data pipelines, focusing on what we can try in open source, learn from, improve, and apply to modeling human behavior in industry for practical results.
The advisory board for this meetup is Cicero Institute (Cicero.ai), and its conferences are AI.vision and self.driving.cars. We like specific technical problems (self-driving cars) and the way they inform better higher-level inference of the future of AI (AI.vision).
Upcoming events (4+)
- Building Production-ready GenAI Apps with RAG (Weaviate/deepset/Unstructured) at NEON, San Francisco, CA
Join us for a fun evening with snacks, drinks, and lots of knowledge to unlock the true potential of AI! Integrate your vast internal knowledge base, build production-ready RAG (Retrieval Augmented Generation) pipelines, and guide your model to produce accurate results.
- Ever felt like your LLM daydreams its own alternative facts?
- Ever hit a knowledge wall because your AI's memory just isn't expansive enough or its wisdom doesn't stretch far enough?
- Tired of digging deep into your pockets just to keep that model finely tuned?
Customizing LLM Applications with Haystack
Every LLM application comes with a unique set of requirements, use cases and restrictions. Let's see how we can make use of open-source tools and frameworks to design around our custom needs.
Build bulletproof generative AI applications with Weaviate and LLMs
Building AI applications for production is challenging. Your users don't like to wait. If you deliver the right results in milliseconds instead of seconds, you can win them over. Production-grade pipelines also need to prevent the LLM from making up false facts. We'll show you how to solve this with live demos and ready-to-fork open-source GitHub projects using Weaviate, your most beloved open-source vector database.
Head of Developer Growth at weaviate.io
Lead Developer Advocate at deepset.ai
DevRel Engineer at unstructured.io
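The talks above revolve around retrieval: before generation, a RAG pipeline fetches the most relevant snippets from a vector store so the model grounds its answer in real context instead of inventing facts. The core mechanism is nearest-neighbor search over embeddings. Here is a minimal, self-contained sketch of that step in pure Python, with hand-made toy embeddings standing in for a real embedding model and an in-memory list standing in for a vector database such as Weaviate (all names and vectors below are illustrative, not Weaviate's API):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, documents, top_k=2):
    # Rank documents by similarity of their (precomputed) embeddings
    # to the query embedding, and return the text of the best top_k.
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(query_vec, d["vector"]),
        reverse=True,
    )
    return [d["text"] for d in ranked[:top_k]]

# Toy corpus with hand-made 3-d "embeddings"; a real pipeline would
# compute these with an embedding model and store them in a vector DB.
docs = [
    {"text": "Weaviate stores vectors for semantic search.", "vector": [0.9, 0.1, 0.0]},
    {"text": "Pizza will be served at the meetup.", "vector": [0.0, 0.2, 0.9]},
    {"text": "RAG grounds LLM answers in retrieved context.", "vector": [0.8, 0.3, 0.1]},
]

# Retrieve context for a query embedding; the snippets returned here
# would be prepended to the LLM prompt in a full RAG pipeline.
context = retrieve([1.0, 0.2, 0.0], docs, top_k=2)
```

A production system replaces the linear scan with an approximate nearest-neighbor index, which is how a vector database returns results in milliseconds rather than seconds at scale.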
- TED AI Hackathon at 555 California St, San Francisco, CA
Bay Area AI is happy to partner with TED AI to share the first TED AI Hackathon to be held in San Francisco at Microsoft Reactor.
Please join the AI for Good effort and register at the event website:
We're happy to welcome Arun Gupta, who leads the Open Ecosystem at Intel and previously led open source strategy at Amazon and Apple, as the organizer!
- Workshop: "Building production-ready LLM-powered applications" by Josh Tobin at Oakland Scottish Rite Center, Oakland, CA
📣 Attention! Clear your schedules for November 13th! 🗓️
🌟 We're thrilled to host a Workshop "Building production-ready LLM-powered applications" with Josh Tobin, CEO of Gantry & co-creator of Full Stack Deep Learning.
A must-attend for anyone interested in #AI & #LLM! 🌟
Food and drinks are included 🍕🥤
About the course
The way AI-powered apps are built has changed:
- Before LLMs, an idea would bottleneck on training models from scratch, and then it'd bottleneck again on scalable deployment.
- Now, a compelling MVP built on pretrained LLMs and APIs can be configured and serving users within an hour.
An entirely new ecosystem of techniques, tools, and tool vendors is forming around LLMs. Even ML veterans are scrambling to orient themselves to what is now possible and figure out the most productive techniques and tools.
In this course, we'll teach you how to build AI-powered applications from scratch, while following the best practices that will allow you to balance shipping quickly with building high-quality, production-ready applications your users trust. We'll walk you through a structured approach to AI app development loosely based on the test-driven development methodology used in traditional software engineering.
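The test-driven analogy above can be made concrete: write assertions about model behavior first, then iterate on prompts and retrieval until they pass. Below is a minimal sketch of such an evaluation harness; the `generate` function is a hypothetical, deterministic stand-in for a real LLM API call (not Gantry's product or any specific vendor's API), included only so the harness is runnable:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; deterministic
    # so the evaluation harness below can run without a model.
    if "capital of France" in prompt:
        return "The capital of France is Paris."
    return "I don't know."

def evaluate(cases):
    # Run each case through the model and check that every required
    # substring appears in the output, like a unit test for behavior.
    results = []
    for case in cases:
        output = generate(case["prompt"])
        passed = all(s in output for s in case["must_contain"])
        results.append({"prompt": case["prompt"], "passed": passed})
    return results

# Behavioral test cases written before tuning the prompt: one checks
# a factual answer, one checks the model admits ignorance.
cases = [
    {"prompt": "What is the capital of France?", "must_contain": ["Paris"]},
    {"prompt": "What is the capital of Atlantis?", "must_contain": ["don't know"]},
]
report = evaluate(cases)
```

In practice the case suite grows from observed production failures, so each regression becomes a permanent check, mirroring how test suites evolve in traditional software engineering.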
Gantry is building product testing and analytics for AI-powered applications.
Testing and analytics are essential tools when building any product, but they’re even more essential for AI-based applications. That’s because these applications fail in harder-to-detect ways, and those failures erode user trust over time, eventually leading to churn. Gantry helps you build AI your users trust through powerful observability, analytics, and evaluation for your AI-powered products.
About The Full Stack
Building an AI-powered product is much more than just training a model or writing a prompt.
The Full Stack brings people together to learn and share best practices across the entire lifecycle of an AI-powered product: from defining the problem and picking a GPU or foundation model to production deployment and continual learning to user experience design.
We've taught courses in building deep learning-powered applications at UC Berkeley, in person, and online. More recently, we hosted the first LLM bootcamp focused on teaching practitioners how to build LLM-powered applications, from prompt engineering to retrieval augmentation and AI-first design. Our courses have featured guest lectures from Pieter Abbeel (Professor at UC Berkeley and co-founder of Covariant), Richard Socher (co-founder of You.com and former Chief Scientist / EVP at Salesforce), and Andrej Karpathy (OpenAI researcher and former Director of AI at Tesla).
Our courses have been described as "high quality tokens" by Andrej Karpathy and "the most comprehensive and interesting class I ever attended" by Boris Dayma (creator of Craiyon and DALL-E mini), and Jo Kristian Bergum (Distinguished Engineer at Yahoo and co-creator of Vespa) said, "I can't believe they made this available for free."