
What we’re about
MLOps London is a community of like-minded engineers, developers and data scientists who focus on the challenges faced while building and deploying production Machine Learning systems at scale. It is an open and diverse place to meet, share experiences and learn from each other.
Our regular meetups are an opportunity to hear from leaders in the field about challenges, tooling and best practices. If you are interested in speaking, or want to suggest a great talk you’ve heard before, please let us know (no vendor or recruitment pitches, please).
Submit a talk: email marketing@seldon.io
Watch the previous talks on the MLOps London YouTube Channel (like and subscribe!): https://www.youtube.com/@mlopslondon
Join the Seldon slack community to continue the discussions: https://join.slack.com/t/seldondev/shared_invite/zt-2tzqte1y8-yG1RsHnWpimYcOqAnMk2JA
Upcoming events (1)
MLOps London April 2025 at Rise London, London
## Details:
Livestream: coming soon
⚠️ In Person Form: https://forms.gle/SQCDSoNALpVUJ11N9
The clouds are shining and the pollen allergies are blooming, which can only mean one thing: spring is arriving in London and it's time to gather indoors for talks on LLMOps and more!
## Agenda:
⏱️ 6:00 pm onwards - Arrival and networking
⏱️ 6:30 pm - Kick off and welcome with Alex Housley (Founder at Seldon)
## Speakers:
🎙️ Navigating LLM Deployment: Tips, Tricks, and Techniques
Meryem Arik, CEO & Co-founder of TitanML
Join Meryem Arik for a discussion of best practices in model optimization, serving and monitoring, with practical tips and real case studies.
🎙️ Cost Optimisation Strategies for ANNA’s AI Accountant: Practical LLM Batching & Prompt Caching
Nikolai Turusin, Lead Data Scientist at ANNA
Drawing on ANNA’s experience processing 2 billion tokens each month, with system prompts reaching 50k tokens, this talk shows how batch APIs and prompt caching techniques can dramatically cut expenses in production LLM systems.
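For a flavour of the techniques in this talk, here is a minimal, illustrative sketch (not ANNA’s actual pipeline) using the OpenAI Python SDK’s Batch API as a stand-in provider: many requests share one long system prompt and are submitted as a single batch job, trading latency for discounted batch pricing and keeping the shared prefix identical so provider-side prompt caching can reuse it where supported. The model name, file path and helper functions below are placeholders.

```python
# Hypothetical sketch: batching many requests that reuse one long system prompt.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = "..."  # the long (e.g. tens of thousands of tokens) shared instructions


def build_batch_file(user_messages, path="batch_requests.jsonl"):
    """Write one JSONL line per request; identical system prompts maximise prefix reuse."""
    with open(path, "w") as f:
        for i, msg in enumerate(user_messages):
            f.write(json.dumps({
                "custom_id": f"req-{i}",
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {
                    "model": "gpt-4o-mini",  # placeholder model
                    "messages": [
                        {"role": "system", "content": SYSTEM_PROMPT},
                        {"role": "user", "content": msg},
                    ],
                },
            }) + "\n")
    return path


def submit_batch(path):
    """Upload the JSONL file and start an asynchronous batch job (cheaper, slower)."""
    batch_input = client.files.create(file=open(path, "rb"), purpose="batch")
    return client.batches.create(
        input_file_id=batch_input.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
```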
✨✨✨More Speaker Announcements Coming Soon!✨✨✨