Road to HACK: Fine-Tune Your Own AI Model | Hands-On Workshop
Details
Large language models are impressive out of the box — but they hallucinate, they're generic, and they need the cloud.
What if you could take a small open-source model and turn it into a domain expert that runs on your own hardware?
That's exactly what we're doing in this hands-on workshop.
Luca Massaron — data scientist, bestselling AI author, 3x Kaggle Grandmaster, and Google Developer Expert — will walk you through a complete fine-tuning pipeline from scratch: evaluate a base model, generate synthetic training data, fine-tune with QLoRA, and measure the improvement. All on consumer-grade hardware.
By the end of the session, you'll have a repeatable playbook for making small models punch way above their weight in any domain you care about.
What you'll work through:
1. Baseline evaluation of a pre-trained model
2. Synthetic dataset generation from real sources
3. Supervised fine-tuning with QLoRA (parameter-efficient, memory-friendly)
4. Post-training evaluation to measure the knowledge gain
Bonus: fine-tuning for classification tasks (sentiment analysis)
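To see why the QLoRA step is "parameter-efficient", here is a minimal plain-Python sketch of the low-rank adapter idea LoRA is built on (illustrative only, not the workshop's actual code): the pre-trained weight matrix W stays frozen, and training only touches two small matrices A and B whose product forms the update.

```python
# Illustrative sketch of the LoRA idea behind QLoRA (names and sizes are
# made up for the example): the frozen weight W is never updated; only the
# low-rank factors A (r x d) and B (d x r) are trained, and the effective
# weight is W + (alpha / r) * B @ A.

def matmul(X, Y):
    """Plain-Python matrix multiply for small lists-of-lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, r, alpha = 4, 1, 2   # hidden size, adapter rank, LoRA scaling factor

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
A = [[0.1] * d]                      # r x d, trainable
B = [[0.2] for _ in range(d)]        # d x r, trainable

delta = matmul(B, A)                 # d x d low-rank update
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)]
         for i in range(d)]

full_params = d * d                  # what a full fine-tune would update
lora_params = r * d + d * r          # what LoRA actually trains
print(full_params, lora_params)      # 16 vs 8 here; the gap grows with d
```

At realistic sizes (d in the thousands, r around 8–64) the trainable-parameter count drops by orders of magnitude, which, combined with 4-bit quantization of the frozen weights, is what lets QLoRA fit on a free Colab T4.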
What to bring:
Your laptop. The workshop runs on Google Colab (free T4 GPU tier), so no fancy hardware required — just a browser and a Google account. If you have a machine with an NVIDIA GPU (16GB+ VRAM) or an Apple Silicon Mac (16GB+ unified memory), you can run everything locally too.
This event is part of Road to HACK — the official warm-up series for GDG AI HACK 2026.
On May 8–10, 160 selected participants will compete in a 24-hour AI hackathon in Milan, with tracks spanning on-device AI, vision AI, and EdTech.
Fine-tuning is one of the most directly applicable skills you can bring to the hack.
Applications for GDG AI HACK 2026 are open until April 17 → gdgaihack.com
