
What we're about
Join us for the first inference & vLLM technical meetup in London, bringing together AI practitioners, infrastructure and inference experts, as well as companies using vLLM in production.
Whether you're experimenting with vLLM or running large-scale inference workloads, this event is for you. Expect hands-on insights, real-world feedback, and open discussions with others working on optimizing inference at scale.
📍 Location: London, UK
🕕 Time: 6:30 PM – 10:00 PM
💬 Format: In-person
Agenda:
- 6:30 – 7:00 PM: Welcome
- 7:00 – 8:30 PM: Talks
  - Exxa - Etienne Balit (co-founder & CTO): intro to vLLM & a deep dive on speculative decoding
  - Hiverge - Alhussein Fawzi (co-founder & CEO): topic to be announced soon
  - Doubleword - Jamie Dborin (co-founder): batched inference
- 8:30 – 10:00 PM: Open networking with drinks & pizza
We'll discuss performance optimizations, scaling strategies, hardware compatibility, and more.
🎯 Who should come?
ML engineers, infra & DevOps teams, AI founders, and anyone working on inference, using or evaluating vLLM in their stack.
Upcoming events
London First inference & vLLM meetup (Gen-AI)
The Loading Bay at Techspace, 25 Luke St, EC2A 4DS, London, GB
🎙️ Do you want to become a speaker?
We're looking for speakers to share their technical experience with inference & vLLM. If you're interested, please fill in this form 👉 Link
🎟️ Free registration – spots are limited