Title: Fine-Tuning vs RAG: What Actually Works for Specializing LLMs?
Description:
Hello everyone!
I'm excited to invite you to our next Machine Learning group meetup.
Agenda:
Presentation: Fine-Tuning vs RAG -- Benchmarking Approaches for Specializing Large Language Models
Presenter: Theodoros Messinis, AI Software Architect
Abstract:
When you need an LLM to perform a specialized task, should you fine-tune it or augment it with Retrieval-Augmented
Generation (RAG)? Or both?
In this presentation, we'll walk through a hands-on comparison of four approaches -- Base Model, Fine-Tuned, RAG, and
Hybrid -- across four real tasks: financial sentiment analysis, numerical reasoning, financial ratio calculation, and
spam/phishing detection. We'll also explore whether model size matters by comparing a fine-tuned DistilBERT (66M
parameters) against a fine-tuned GPT-4o-mini (~8B parameters) on the same task.
Key topics covered:
- How fine-tuning teaches models new skills while RAG provides information
- Controlled benchmarks across BERT (110M), Llama2 (7B), and DistilBERT (66M) architectures
- Adversarial stress testing: where do these approaches break?
- LLM-as-Judge evaluation using GPT-4o for structured scoring
- Does 121x more parameters justify the cost? A model size analysis
- Live demo of all approaches running side by side
- A practical decision framework for choosing the right approach
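For those new to LLM-as-Judge evaluation: the idea is to hand a strong model a fixed rubric and parse a structured verdict back. A hedged sketch of such a harness follows; the judge call itself is stubbed out (in the talk's setup it would go to GPT-4o), and the field names (`score`, `reasoning`) are invented for this example, not the presenter's actual schema:

```python
# Hedged sketch of an LLM-as-Judge scoring harness: a rubric, the judge
# prompt, and parsing of a structured JSON verdict. The model call is
# stubbed; a real run would send judge_prompt(...) to a judge model.
import json

RUBRIC = ('Score the answer from 1-5 for factual accuracy. '
          'Reply as JSON: {"score": int, "reasoning": str}')

def judge_prompt(question: str, answer: str) -> str:
    """Assemble the prompt shown to the judge model."""
    return f"{RUBRIC}\n\nQuestion: {question}\nCandidate answer: {answer}"

def parse_verdict(raw: str) -> dict:
    """Parse the judge's JSON reply, clamping the score into [1, 5]."""
    verdict = json.loads(raw)
    verdict["score"] = max(1, min(5, int(verdict["score"])))
    return verdict

# Stubbed reply of the kind a judge model might return:
reply = '{"score": 4, "reasoning": "Correct formula, minor rounding issue."}'
print(parse_verdict(reply)["score"])  # 4
```

Structured scoring like this makes the four approaches directly comparable on tasks without exact-match answers, such as the numerical-reasoning explanations.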
Whether you are an expert in the field or just getting started, your insights and questions are welcome. This is a
great opportunity to share knowledge, explore new ideas, and build connections with fellow ML enthusiasts.
Details:
Date: 26/03/2026
Time: 20:00 CET
Location: Join via Zoom -- https://us06web.zoom.us/j/3426020722?pwd=RWVkekFXTld3SWdKK05BSnhlei9BQT09
GitHub: https://github.com/intelliswarm-ai/finetune-vs-rag
Looking forward to meeting you all and having a great conversation. See you soon!
Best Regards,
Theodoros Messinis
