Details

Join us for a one hour hands-on workshop where we will explore emerging challenges in developing and validating world foundation models and video-generation AI systems for robotics and autonomous vehicles.

Time and Location

Mar 11, 2026
10-11am PST
Online, Register for the Zoom!

Industries from robotics to autonomous vehicles are converging on world foundation models (WFMs) and action-conditioned video generation, where the challenge is predicting physics, causality, and intent. But this shift has created a massive new bottleneck: validation.

How do you debug a model that imagines the future? How do you curate petabyte-scale video datasets to capture the "long tail" of rare events without drowning in storage costs? And how do you ensure temporal consistency when your training data lives in scattered data lakes?

In this session, we explore technical workflows for the next generation of Visual AI. We will dissect the "Video Data Monster," demonstrating how to build feedback loops that bridge the gap between generative imagination and physical reality. Learn how leading teams are using federated data strategies and collaborative evaluation to turn video from a storage burden into a structured, queryable asset for embodied intelligence.
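The "long tail" curation problem mentioned above can be illustrated with a minimal sketch: given per-clip event metadata, surface the rare event classes worth prioritizing. The clip records, field names, and threshold below are illustrative assumptions, not part of any specific Voxel51 workflow.

```python
from collections import Counter

# Hypothetical clip metadata; in practice this would be queried from a
# data lake or dataset-management tool rather than hard-coded.
clips = [
    {"id": "clip_001", "event": "lane_follow"},
    {"id": "clip_002", "event": "lane_follow"},
    {"id": "clip_003", "event": "pedestrian_crossing"},
    {"id": "clip_004", "event": "lane_follow"},
    {"id": "clip_005", "event": "animal_on_road"},
]

def long_tail_events(clips, max_share=0.1):
    """Return event labels whose share of all clips is at most max_share."""
    counts = Counter(c["event"] for c in clips)
    total = sum(counts.values())
    return sorted(e for e, n in counts.items() if n / total <= max_share)

# Each rare event here accounts for 1/5 = 20% of clips.
print(long_tail_events(clips, max_share=0.25))
```

A real pipeline would run this kind of frequency query over indexed metadata so that rare scenarios can be sampled for training without scanning raw video.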

About the Speaker

Nick Lotz is a chemical process engineer turned developer who is currently a Technical Marketing Engineer at Voxel51. He is particularly interested in bringing observability and security to all layers of the AI stack.

Related topics

Artificial Intelligence
Computer Vision
Machine Learning
Open Source
Robotics

Sponsors

StrongLoop

StrongLoop, an IBM company, helps build Node.js and APIs made for the cloud.
