🧠 Teaching AI Something It Was Never Trained On (RAG Tutorial)
Let's learn RAG!
RAG (Retrieval-Augmented Generation) is how you give an AI a document it has never seen and make it an instant expert. Instead of retraining the model, you store your documents in a searchable index. When you ask a question, the system finds the relevant passages and hands them to the AI along with your question. The AI reads, reasons, and answers, all from material it never trained on. No cloud. No fine-tuning. No guesswork.
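The retrieve-then-answer loop described above can be sketched in a few lines. This is a toy illustration, not the talk's actual stack: the "embeddings" here are simple bag-of-words counts (a real setup would use a model like nomic-embed-text), the sample doc chunks are invented for the example, and the final LLM call is left out.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words vector. A real RAG stack would call an
    embedding model (e.g. nomic-embed-text) for dense vectors."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    # Score every stored chunk against the query, keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical doc chunks for an invented language (illustrative only).
chunks = [
    "Loops are written with the ∞ symbol followed by a block.",
    "Variables are bound with the ≔ symbol.",
]
question = "How do I write loops?"
context = retrieve(question, chunks)[0]
# The retrieved passage gets pasted into the prompt; the actual LLM
# call (e.g. to a local Ollama model) is out of scope for this sketch.
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
```

The key idea: the model never needs the whole document, only the passages that score highest against the question.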
Now, what if you handed an AI a document about a programming language it has never seen: syntax invented from scratch, never published, nowhere in its training data? And then asked it to answer questions about it?
That's exactly what we're doing.
We built a real, working symbolic programming language from scratch. No English keywords. Every control structure is a symbol. No LLM has ever heard of it. We're going to watch a local AI answer questions about it accurately, write working code in it, and refuse to hallucinate features that don't exist yet.
Then we'll open the hood and talk about why it works.
The three live tests:
- Ask it a real question about the language and watch it cite the docs and produce correct code
- Ask it about a feature that doesn't exist yet and see if it fakes it or admits the gap
- Ask it something totally off-topic and see the clean refusal
After the demo (~20 min): how RAG actually works. Vector search, hybrid search, why fine-tuning gets this wrong, and the whole local stack (Ollama, Open WebUI, nomic-embed-text) running on your own hardware with no cloud required.
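As a taste of the hybrid-search idea mentioned above: hybrid retrieval blends a keyword score with a vector-similarity score, so exact symbol matches and semantic matches both count. The sketch below is a minimal, self-contained stand-in — plain word overlap instead of BM25, bag-of-words cosine instead of real embeddings, and an assumed 50/50 weight — not the implementation used in the demo.

```python
import math
import re
from collections import Counter

def lexical_score(query, doc):
    """Fraction of query words appearing in the doc
    (a crude stand-in for BM25 in a real hybrid setup)."""
    q = set(re.findall(r"\w+", query.lower()))
    d = set(re.findall(r"\w+", doc.lower()))
    return len(q & d) / len(q) if q else 0.0

def vector_score(query, doc):
    """Cosine over toy bag-of-words vectors; a real stack would
    embed both sides with a model such as nomic-embed-text."""
    q = Counter(re.findall(r"\w+", query.lower()))
    d = Counter(re.findall(r"\w+", doc.lower()))
    dot = sum(q[w] * d.get(w, 0) for w in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    # Weighted blend: alpha controls lexical vs. vector influence.
    scored = [(alpha * lexical_score(query, d)
               + (1 - alpha) * vector_score(query, d), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)]

docs = [
    "Error handling uses the ⚡ symbol before a block.",
    "Loop syntax uses the ∞ symbol followed by a block.",
]
top = hybrid_rank("loop syntax", docs)[0]
```

Pure vector search can miss rare exact tokens (like an invented symbol's name), and pure keyword search misses paraphrases; blending the two is why hybrid search tends to win on niche documentation.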
No prior AI knowledge needed. The last 15 minutes are open Q&A, so bring hard questions.
All demos run locally. No data leaves the room.
