How to Build an On-Premise LLM Finetuning Platform


Details
Aziz (Aleph Alpha) will talk about how to build an on-premise LLM finetuning platform, exploring different fine-tuning approaches (LoRA, QLoRA, and full finetuning) and when to use each. He will also show how to implement dynamic worker scheduling and automatic GPU resource allocation, helping you streamline training workflows and turbocharge your engineering teams, all while ensuring your data stays securely on your own infrastructure.
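To give a flavor of one of the approaches on the agenda: the core idea behind LoRA is to freeze the base weight matrix and train only a low-rank update. The sketch below is purely illustrative (it is not the speaker's implementation, and the function name and toy values are made up for this example); it shows how a rank-r update B @ A is folded into a frozen weight matrix W.

```python
# Illustrative LoRA sketch: instead of updating the full weight matrix W
# (d x k), train two small matrices B (d x r) and A (r x k) with rank
# r << min(d, k). The effective weight is W + (alpha / r) * (B @ A),
# so trainable parameters drop from d*k to r*(d + k).

def lora_effective_weight(W, A, B, alpha, r):
    """Combine a frozen weight matrix W with a scaled low-rank update B @ A."""
    d, k = len(W), len(W[0])
    scale = alpha / r
    return [
        [W[i][j] + scale * sum(B[i][t] * A[t][j] for t in range(r))
         for j in range(k)]
        for i in range(d)
    ]

# Toy example with d = 2, k = 2, rank r = 1 (hypothetical values):
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weights
B = [[1.0], [2.0]]             # d x r, trainable
A = [[0.5, 0.5]]               # r x k, trainable
W_eff = lora_effective_weight(W, A, B, alpha=1.0, r=1)
# With scale = 1.0, B @ A adds [[0.5, 0.5], [1.0, 1.0]] to W.
```

QLoRA applies the same low-rank trick on top of a quantized (e.g. 4-bit) base model to cut GPU memory further; when to prefer it over LoRA or full finetuning is exactly the trade-off the talk covers.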
If you want to join this event, please sign up on our Luma page: https://lu.ma/dl0nl7hb
⚠️ Registration is free, but required due to building security.
🔈 Speakers:
- Aziz Belaweid, AI Engineer
Agenda:
✨ 18:30 Doors open: time for networking with fellow attendees
✨ 19:00 Talk and Q&A
✨ 20:00 Mingling and networking with pizza and drinks
✨ 21:00 Meetup ends
- Where: In person, Aleph Alpha Berlin, Ritterstraße 6
- When: Tuesday, June 24th
- Language: English