# Real Time Video AI Summit by Daydream
Join us for a first-of-its-kind gathering focused on the future of open, real-time video AI technology.
This one-day summit brings together researchers, builders, and creative technologists who are shaping what’s next in video. From research talks on core advances like Self-Forcing and StreamV2V to workflows and live demos, the program highlights the people and tools driving the real-time video space forward.
***
🫂 Featured speakers
- Xun Huang: Prof. at CMU and Author of Self-Forcing
- Chenfeng Xu: Prof. at UT Austin and Author of StreamDiffusion
- Jeff Liang: Researcher at Meta and Author of StreamV2V
- Cerspence: Creative Technologist and Creator of ZeroScope
- DotSimulate: Creative Technologist and Creator of StreamDiffusionTD
- Yondon Fu: Applied Researcher and Creator of Scope
- RyanOnTheInside: Applied Researcher on StreamDiffusion and ComfyUI
…and more to be announced!
***
🗓 Agenda Overview
Morning: Keynotes & research talks including Self-Forcing and StreamV2V
Midday: Best practices, live demos, hands-on workshops, and a community panel
Afternoon: Lightning talks, creative showcases, and an Artist × Infra × Research panel
Evening: Closing keynote + community drinks
***
🕘 Summit Information
Where: San Francisco
When: October 20, 2025 during AI Open Source Week
***
❤️ About your host:
Daydream is an open, cloud-based platform for real-time generative video, built on top of the Livepeer decentralized GPU network.
It gives developers and researchers hosted diffusion model APIs (SDXL, SD 1.5, SD-Turbo), SDKs, and ready-made real-time video pipelines such as StreamDiffusion.
By combining open-source research (e.g., StreamDiffusion, ControlNet, IPAdapter) with scalable GPU infrastructure, Daydream lets creators, engineers, and studios prototype, build, and deploy real-time AI video applications without managing their own GPU servers.