

What we’re about
What's happening, Stockholm! We are firing up a local MLOps chapter for this amazing city!
The MLOps Community fills the need to share real-world Machine Learning Operations best practices from engineers in the field. While MLOps shares a lot of ground with DevOps, the differences are as big as the similarities. We needed a community laser-focused on solving the unique challenges we deal with every day building production ML pipelines.
We’re in this together. Come learn with us in a community open to everyone. Share knowledge. Ask questions. Get answers.
You can check out our Slack or our podcast, filled with tips and tricks for overcoming the common obstacles we've all hit in the real world. Find the solutions you need. Share, learn, and grow with us as we work to bring standardization to the chaotic world of MLOps.
Upcoming events (1)
Stockholm MLOps Summer Bash!!! Netlight Consulting, Stockholm
All right!!!
We're super excited to announce our next meetup!!!
This will take place on June 18 at Netlight's offices at Regeringsgatan 25 in Stockholm and will be sponsored by AWS and Netlight.
The program is being finalized, so stay tuned for updates! Like last year's Summer Bash at AI Sweden, we will invite a number of start-ups to showcase what they're up to and how they tackle different aspects of AI in production.
The format will be six or so 10-15 minute talks, a pizza break, and then a moderated Q&A.
Event Program
- Doors open at 17:00 CET
- Talks begin at 17:45 CET
- There will be light food & drinks
- The moderated Q&A session will kick off around 19:45 CET
The speaker line-up is under construction. Stay tuned for further additions.
- Sebastian Thunman, Co-Founder of Strawberry, will give a talk titled "Behind the scenes of taking on Google Chrome". Abstract: Sebastian shares the story of how Strawberry came to be, why they're taking on Chrome, how they're doing it, and their vision for the future of the internet.
- Lucas Ferreira, CEO of Inceptron, will give a talk titled "AI inference economics are broken". Abstract: AI model sizes are ballooning, pushing inference costs and energy use to unsustainable levels. Coupled with limited GPUs and power constraints, this is breaking AI inference economics. This talk outlines a compiler-first approach to optimizing models per use case, slashing inference cost by ~60% and tripling throughput on existing hardware.
/Patrick