Workshop: Master Fine-Tuning LLMs — From Weights to Wisdom


## Details
- Minimum 2 years of professional experience required. Freshers and college students will not be admitted; we ask for your understanding.
- Lunch will not be provided; please make your own arrangements.
## Workshop Overview
Large Language Models (LLMs) are powerful, but out of the box they’re generic.
To make them truly useful in your domain, you need to fine-tune them.
This hands-on workshop takes you beyond prompt engineering into the world where you retrain and adapt models by updating their weights: the real deal.
### What You’ll Learn
We’ll cover all three approaches to fine-tuning:
- Full Fine-Tuning – updating all of the model's weights for maximum customization.
  - When to use it (and when it's overkill).
  - GPU/compute requirements.
  - Walkthrough: fine-tuning a GPT-like model on a custom dataset.
- Parameter-Efficient Fine-Tuning (PEFT) – LoRA, QLoRA, adapters.
  - Update only a small subset of weights (millions vs. billions).
  - Train on a laptop or low-cost GPU.
  - Demo: LoRA fine-tuning for domain-specific chatbots.
- Instruction Fine-Tuning & Alignment (SFT + RLHF) – teaching models to follow instructions, tone, and style.
  - Use reward models and human feedback loops.
  - Hands-on: building a mini RLHF pipeline.
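To give a flavor of why PEFT is so much cheaper than full fine-tuning, here is a minimal NumPy sketch of the LoRA idea. It is an illustrative toy, not code from the workshop notebooks: the layer size, rank, and scaling factor are assumptions chosen for the example. LoRA freezes the base weight matrix W and learns only two small low-rank factors A and B, adding their scaled product to W's output.

```python
import numpy as np

# Toy LoRA sketch (assumed shapes, not from the workshop materials).
# Instead of updating a full d_out x d_in matrix W, LoRA trains two
# low-rank factors B (d_out x r) and A (r x d_in) and adds their
# product, scaled by alpha / r, on top of the frozen base weights.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 1024, 1024, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen base weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: the
                                            # update starts as a no-op

def lora_forward(x):
    # Base path plus the low-rank update path.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA trains
print(f"full fine-tuning trains {full_params:,} params")
print(f"LoRA trains {lora_params:,} params "
      f"({100 * lora_params / full_params:.1f}%)")
```

For this (hypothetical) 1024x1024 layer, LoRA trains roughly 1.6% of the parameters, which is what makes laptop-scale fine-tuning feasible. In practice you would use a library such as Hugging Face `peft` rather than hand-rolled matrices.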
### Who Should Attend
- ML Engineers & Data Scientists who want to go beyond prompt hacks.
- Developers exploring custom AI assistants for their company or product.
- Researchers curious about alignment & efficiency tricks.
- Startup founders wanting to differentiate their AI with domain-trained LLMs.
### Takeaways
- Hands-on code notebooks you can reuse.
- Understanding of the trade-offs between the three approaches.
- A fine-tuned mini-LLM you can deploy.
- Confidence to choose the right method for your projects.
⚡ Seats are limited — this workshop is designed to be interactive and practical.
Sign up now and start bending LLMs to your will.
***
## Prerequisites
- Basic Python knowledge.
- Laptop with Colab notebook access.
- Curiosity and enthusiasm (we'll teach you everything else from the basics!)
