Details

### 📅 Duration: 2 Hours

Target Audience:

  • Intermediate to advanced developers, ML engineers, and backend/infra professionals
  • Curious professionals (2–10 years of experience) who have heard of ChatGPT but want to understand how it really works

***

### 🧠 Agenda Breakdown:

| Time | Segment |
| ---- | ------- |
| 0–10 min | 🔥 Opening: "Why LLMs Are Eating the World" (real use cases) |
| 10–30 min | 🧱 LLM Core Design: Transformer, Tokens, Training at Scale |
| 30–55 min | 🔍 From GPT-2 to GPT-4: Scaling Laws, Attention, MoE |
| 55–75 min | 🔄 How LLMs Learn to Chat: RLHF, Prompt Engineering, Safety |
| 75–95 min | 🧩 Demo: Build a RAG App (LLM + Custom Data) using LangChain/Haystack |
| 95–110 min | ⚙️ Fine-Tuning + LoRA: Customize Open LLMs with 1 GPU |
| 110–120 min | Q&A + top tools & free resources (Hugging Face, Ollama, Mistral) |
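
The fine-tuning segment centers on LoRA, whose core trick fits in a few lines: keep the pretrained weight matrix W frozen and learn only a low-rank update. Below is a minimal plain-Python sketch of that idea (toy list-of-lists matrices and a hypothetical `lora_effective_weight` helper; real implementations use a library such as PEFT with proper tensors):

```python
def matmul(A, B):
    # Plain-Python matrix product over lists of rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_effective_weight(W, A, B, alpha=16, r=2):
    # LoRA freezes W and learns a low-rank update: W_eff = W + (alpha / r) * B @ A.
    # A is r x d_in and B is d_out x r, so only r * (d_in + d_out) parameters
    # are trained instead of d_in * d_out.
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)] for wrow, drow in zip(W, BA)]
```

With d_in = d_out = 4096 and r = 8, a layer trains 8 × (4096 + 4096) = 65,536 parameters instead of roughly 16.8 million, which is why a single GPU can be enough.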

***

### 🎯 Key Concepts Covered:

  • Tokenization (BPE, embeddings)
  • Transformer architecture (multi-head attention, LayerNorm)
  • What happens during pretraining (auto-regression, dataset size)
  • RLHF & human feedback alignment
  • LoRA vs full fine-tuning (real-world tips)
  • Vector DBs + LangChain for building real AI apps
  • Prompt Engineering vs. RAG vs. Fine-Tuning: when to use what
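
The transformer material above revolves around one operation: attention. As a preview, here is a minimal single-head scaled dot-product attention in plain Python (toy vector sizes, no libraries; real models use batched tensor math and multiple heads):

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Single-head scaled dot-product attention on plain lists of floats."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # weights sum to 1 per query
        # Output is the attention-weighted average of the value vectors.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

Each output vector is a convex combination of the value vectors, weighted by how strongly the query matches each key; multi-head attention simply runs several such maps in parallel on learned projections.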

***

### 💻 Optional Demos (Boosts Engagement)

  1. Chat with your own PDF using RAG
  2. LoRA fine-tune Mistral-7B on custom support data
  3. Prompt injection attack → defense
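
The retrieval half of demo 1 can be previewed without LangChain or a vector DB. The sketch below is deliberately toy: the hypothetical `embed()` is just a bag-of-words counter standing in for a real embedding model, and `retrieve()` does brute-force cosine similarity over document chunks:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. A real RAG app would call an
    # embedding model here and store the vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    # Rank chunks by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "LoRA adds small low-rank adapter matrices to a frozen model.",
    "RLHF aligns a model with human preferences after pretraining.",
    "Tokenizers such as BPE split text into subword units.",
]
context = retrieve("How does LoRA fine-tuning work?", chunks, k=1)
# The retrieved chunk(s) get pasted into the LLM prompt as grounding context.
```

The "PDF chat" demo is this loop plus two real components: a PDF-to-chunks splitter on the way in, and an LLM call that answers using the retrieved context on the way out.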

Join Zoom Meeting

[https://us02web.zoom.us/j/83139818157?pwd=TajnuQ9a0Zt9MK2yhEQOx3PrthWMM8.1](https://us02web.zoom.us/j/83139818157?pwd=TajnuQ9a0Zt9MK2yhEQOx3PrthWMM8.1)

Meeting ID: 831 3981 8157
Passcode: 125512
