[AI_talks]: Can an AI Act Beyond the Screen?
GR00T, the Shift From Writing Code to Training Robots
We’re entering an era where AI doesn’t just generate; it also acts.
Join us for a deep dive into the world of physical AI, where models learn to understand natural language, vision, and motion, enabling them to interact with the real world.
Our focus: NVIDIA’s GR00T, a Vision-Language-Action (VLA) model designed to help robots learn new tasks instead of being hand-coded. Together, we’ll explore what this shift means for the future of development, automation, and human-AI collaboration.
And because ideas are best tested in reality, you’ll also meet RebelBot, our robot powered by GR00T, in a demo that shows how cutting‑edge research can turn into practical robotics.
17:30 welcome_coffee
Meet us at Rebel Café on the 3rd floor and grab a warm drink to start the evening right. After that, we’ll head up to the 7th floor for the session.
18:00 meetup_session
🎤 Can an AI Act Beyond the Screen?
Andreea Monea, AI Engineer at RebelDot, will guide us through the rise of Vision-Language-Action models, the GR00T framework, and how robots are learning to act in our world. Afterwards, we’ll watch a live demonstration of our RebelBot by Andrei Voic and Alexandru Luci, recent winners of the OpenAI Hackathon, judged by a jury that included representatives from NVIDIA, Ollama, Hugging Face, and others.
19:00 networking_and_explore_rebeldot
Stick around after the talk to connect with fellow AI enthusiasts, explore our office, and get a feel for life at RebelDot. Our teams will be around for a chat - whether you’re curious about our projects, our culture, or just up for a good conversation.