What we’re about
We are a group of startup engineers, research scientists, computational linguists, mathematicians, philosophers, and others interested in understanding the meaning of text, reasoning, and human intent through technology. We want to apply our understanding to building new businesses and improving overall human experience in the modern connected world. The MIND Stack explained: mind.wtf.
This is a technical AI meetup: we build systems with Machine Learning on top of Data Pipelines, and we concern ourselves with what we can try in open source, learn from, and improve, modeling human behavior in industry for practical results.
The advisory board for this meetup is Cicero Institute (Cicero.ai), and its conferences are AI.vision and self.driving.cars. We like specific technical problems (self-driving cars) and the way they inform higher-level inference about the future of AI (AI.vision).
Upcoming events (2)
- [lu.ma/_ai registration required] DSPy and GraphRAG: Making GenAI Real
Following the AI Conference, Bay Area AI is hosting its deep dive into the technologies that will make GenAI real:
— LLM Programming
— GraphRAG

We have two talks and will add more. Our GraphRAG section represents the GraphRAG ecosystem with Neo4j, Pinecone, and other partners.
1. DSPy: Prompt Optimization for LM Programs
Michael Ryan, Stanford

It has never been easier to build amazing LLM-powered applications. Unfortunately, engineering reliable and trustworthy LLMs remains challenging. Instead, practitioners should build LM Programs composed of several composable calls to LLMs, which can be rigorously tested, audited, and optimized like other software systems. In this talk I will introduce the idea of LM Programs in DSPy, the library for Programming, not Prompting, LMs. I will demonstrate how the LM Program abstraction allows the creation of automatic optimizers that can tune both the prompts and the weights of an LM Program. I will conclude with an introduction to MIPROv2, our latest and highest-performing prompt optimization algorithm for LM Programs.
Michael Ryan is a master's student at Stanford University working on optimization for Language Model Programs in DSPy and on personalizing language models. His work has been recognized with a Best Social Impact award at ACL 2024 and an honorable mention for outstanding paper at ACL 2023. Michael co-led the creation of the MIPRO and MIPROv2 optimizers, DSPy's most performant optimizers for Language Model Programs. His prior work has showcased unintended cultural and global biases expressed in popular LLMs. He is currently a research intern at Snowflake.
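To give a concrete flavor of what an LM Program looks like, here is a minimal DSPy sketch (not from the talk; the model name, metric choice, and tiny trainset are illustrative assumptions):

```python
import dspy

# Minimal sketch of an LM Program; the model choice is an assumption.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class AnswerQuestion(dspy.Signature):
    """Answer the question in one short sentence."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

# A program is an ordinary composable module built from LM calls,
# so it can be tested, audited, and optimized like other software.
program = dspy.ChainOfThought(AnswerQuestion)

# A tiny labeled set; in practice you would use a real dataset.
trainset = [
    dspy.Example(question="What does DSPy optimize?",
                 answer="Prompts and weights of LM Programs").with_inputs("question"),
    dspy.Example(question="Where is the AI Conference held?",
                 answer="San Francisco").with_inputs("question"),
]

# MIPROv2 searches over instructions and few-shot demos to improve the program's prompts.
optimizer = dspy.MIPROv2(metric=dspy.evaluate.answer_exact_match, auto="light")
optimized = optimizer.compile(program, trainset=trainset)
print(optimized(question="What does DSPy optimize?").answer)
```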
2a. Graphs and AI: Making it Real
Alexy Khrabrov, AI Community Architect, Neo4j

GraphRAG is one of the most promising architectures for enterprise AI. In this talk, we'll explore the technical and community efforts required to make GenAI ready for production, with a focus on the most recent advances in GraphRAG with LangChain and LlamaIndex integrations.
Dr. Alexy Khrabrov is the founder and organizer of bay.area.ai and the AI Community Architect at Neo4j. He is also a founder of opensource.science at NumFOCUS and a cofounder of thealliance.ai. Alexy founded scale.bythebay.io, a conference of the Bay Area developer meetups, and has run it for ten years.
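For a flavor of the GraphRAG retrieval pattern, here is a minimal sketch using the Neo4j Python driver; the connection details, vector index name, and MENTIONS/Entity schema are assumptions for illustration, not details from the talk:

```python
from neo4j import GraphDatabase

# Hypothetical connection; replace with your own Neo4j instance.
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

def graphrag_context(question_embedding: list[float], k: int = 5) -> list[dict]:
    # 1) Vector search over chunk embeddings in a Neo4j vector index (assumed name 'chunk_embeddings').
    # 2) Graph expansion: follow MENTIONS edges to pull in the entities each chunk talks about.
    records, _, _ = driver.execute_query(
        """
        CALL db.index.vector.queryNodes('chunk_embeddings', $k, $embedding)
        YIELD node AS chunk, score
        OPTIONAL MATCH (chunk)-[:MENTIONS]->(e:Entity)
        RETURN chunk.text AS text, score, collect(e.name) AS entities
        ORDER BY score DESC
        """,
        k=k, embedding=question_embedding,
    )
    # The retrieved chunks plus their graph neighborhood become the LLM's context.
    return [r.data() for r in records]
```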
2b. Vectors and Graphs - Better Together
Roie Schwaber-Cohen, Pinecone

This short talk will explore how graph databases and vector databases can be made to work in tandem in agentic (and semi-agentic) ways, delivering unique ways to analyze complex, interconnected data sets.
Roie Schwaber-Cohen is a Staff Developer Advocate at Pinecone, specializing in AI and data-intensive applications. With nearly 20 years of experience in software engineering, he has expertise in full-stack development and microservices architectures.
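As a rough sketch of the "better together" idea (the index name, metadata layout, and neighbor-expansion step are assumptions for illustration, not from the talk), a vector search in Pinecone can seed a graph-style expansion over relationships stored alongside each record:

```python
from pinecone import Pinecone

# Hypothetical index whose records carry a 'related_ids' metadata field linking them to graph neighbors.
pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs")

def retrieve_with_neighbors(query_vector: list[float], k: int = 5) -> list[dict]:
    # Vector search finds semantically similar records...
    response = index.query(vector=query_vector, top_k=k, include_metadata=True)
    results = []
    for match in response.matches:
        # ...and each hit is expanded along its stored relationships, graph-style.
        neighbor_ids = (match.metadata or {}).get("related_ids", [])
        neighbors = index.fetch(ids=list(neighbor_ids)) if neighbor_ids else None
        results.append({"id": match.id, "score": match.score, "neighbors": neighbors})
    return results
```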
2c. Improving RAG with Knowledge Graph and Milvus
Jiang Chen, Zilliz

The talk covers knowledge engineering techniques that improve RAG quality and shows how to extract a knowledge graph offline and implement a comprehensive retrieval method by storing and searching knowledge embeddings in Milvus.
Jiang Chen is the Head of Ecosystem and Developer Relations at Zilliz, the company behind the open-source vector database Milvus. He had previously served as a tech lead and product manager at Google, where he led the development of web-scale semantic understanding and search indexing that powers innovative search products such as short video search. He has years of industry experience handling massive unstructured data and multi-modal content retrieval. Jiang holds a Master's degree in Computer Science from the University of Michigan.
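A minimal sketch of the offline/online split described above, using pymilvus with Milvus Lite; the collection name, embedding dimension, stub embedding function, and toy triples are assumptions for illustration:

```python
import random
from pymilvus import MilvusClient

def embed(text: str) -> list[float]:
    # Placeholder embedding function; swap in a real embedding model.
    return [random.random() for _ in range(384)]

client = MilvusClient("kg_rag.db")  # local Milvus Lite file
client.create_collection(collection_name="kg_triples", dimension=384)

# Offline step: extract knowledge-graph triples from documents, embed them, and store them.
triples = [
    {"id": 1, "vector": embed("Milvus -- is_a --> vector database"), "text": "Milvus is a vector database."},
    {"id": 2, "vector": embed("Zilliz -- maintains --> Milvus"), "text": "Zilliz maintains Milvus."},
]
client.insert(collection_name="kg_triples", data=triples)

# Online step: retrieve the knowledge most relevant to the question and pass it to the LLM as context.
hits = client.search(
    collection_name="kg_triples",
    data=[embed("Who maintains Milvus?")],
    limit=3,
    output_fields=["text"],
)
print(hits[0])
```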
You'll need a Luma registration and an ID that matches it to attend.
- TED AI Hackathon
We’re happy to be the community partners of the TED AI Hackathon.
Please register by September 13; approval is required to join!
https://www.aicamp.ai/event/eventdetails/W2024101910
—
Welcome to the TED AI Hackathon 2024 registration! The hackathon will run from 10 AM Saturday, October 19th to 4 PM Sunday, October 20th, at the Microsoft Reactor in San Francisco. Apply to hack with us. The application deadline is September 13th. All applicants will be notified of selection decisions by September 20th.
The TED AI Open Source Hackathon is an exciting competition that gives participants an opportunity to showcase their coding skills, collaborate with fellow developers, and create innovative AI-based solutions, all with the purpose of promoting open-source applications of AI to urgent societal and environmental issues.
Team prizes:
In addition to an incredible experience, you'll have the chance to compete for a $15K cash prize, split among the top three winning teams:
* 1st place: $7k
* 2nd place: $5k
* 3rd place: $3k

Hackathon Timeline:
* Sep 13th: Applications Close;
* Sep 20th: Applicants Announced;
* Oct 1st: Prep Call;
* Oct 19~20: Hackathon!

Sponsors:
* AWS
* Google Cloud
* GenLab
* Vectara