March 11 - Strategies for Validating World Models and Action-Conditioned Video
78 attendees from 47 groups
Hosted by München AI, Machine Learning and Computer Vision Meetup
Details
Join us for a one-hour, hands-on workshop where we will explore emerging challenges in developing and validating world foundation models and video-generation AI systems for robotics and autonomous vehicles.
Time and Location
Mar 11, 2026
10-11am PST
Online. Register for the Zoom!
Industries from robotics to autonomous vehicles are converging on world foundation models (WFMs) and action-conditioned video generation, where the challenge is predicting physics, causality, and intent. But this shift has created a massive new bottleneck: validation.
How do you debug a model that imagines the future? How do you curate petabyte-scale video datasets to capture the "long tail" of rare events without drowning in storage costs? And how do you ensure temporal consistency when your training data lives in scattered data lakes?
In this session, we explore technical workflows for the next generation of Visual AI. We will dissect the "Video Data Monster," demonstrating how to build feedback loops that bridge the gap between generative imagination and physical reality. Learn how leading teams are using federated data strategies and collaborative evaluation to turn video from a storage burden into a structured, queryable asset for embodied intelligence.
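As a taste of what "structured, queryable video" can look like in practice, here is a minimal sketch using Voxel51's open-source FiftyOne library to index video clips with scenario metadata so rare events can be queried rather than scanned. This is an illustrative example only, not the workshop's actual code; the dataset name and fields such as "weather" and "ego_action" are hypothetical.

```python
# Minimal sketch: indexing video clips with scenario metadata in FiftyOne
# so the "long tail" can be queried instead of re-scanned from storage.
# Dataset name and field names below are illustrative assumptions.
import fiftyone as fo
from fiftyone import ViewField as F

dataset = fo.Dataset("driving_clips")  # hypothetical dataset name

# Register each clip with scenario tags instead of leaving it as a raw file
sample = fo.Sample(filepath="/data/clips/clip_0001.mp4")
sample["weather"] = "heavy_rain"              # illustrative metadata field
sample["ego_action"] = "unprotected_left_turn"  # illustrative metadata field
dataset.add_sample(sample)

# Query the long tail: rainy unprotected left turns, without touching raw storage
rare_events = dataset.match(
    (F("weather") == "heavy_rain") & (F("ego_action") == "unprotected_left_turn")
)
print(f"{len(rare_events)} matching clips")
```

The session will go deeper into workflows like this, including frame-level labels and evaluation loops for generated video.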
About the Speaker
Nick Lotz is a chemical process engineer turned developer who is currently a Technical Marketing Engineer at Voxel51. He is particularly interested in bringing observability and security to all layers of the AI stack.
