Diffusion Model - Connect Sessions
ResearchTrend.AI Diffusion Model Connect Session: Physics & Foundation!
We are excited to announce our upcoming biweekly Diffusion Model (DiffM) Connect Session on ResearchTrend.AI!
This virtual session features two presentations from leading researchers, addressing key bottlenecks in video realism and probabilistic modeling.
Agenda (UTC) - Monday, December 8th
08:00 - 08:30: Yu Yuan
Paper: NewtonGen: Physics-Consistent and Controllable Text-to-Video Generation via Neural Newtonian Dynamics
Abstract: State-of-the-art text-to-video models often produce physically inconsistent motions (e.g., objects falling upward). Yu Yuan introduces NewtonGen, a framework that integrates data-driven synthesis with Neural Newtonian Dynamics (NND). NND models and predicts Newtonian motions, injecting latent dynamical constraints to ensure physically consistent video synthesis with precise parameter control, moving T2V beyond simple appearance learning.
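As a rough intuition for the kind of constraint NND injects (a generic Newtonian state-space sketch under our own assumptions, not the paper's exact formulation), picture a latent object state with position $\mathbf{s}$ and velocity $\mathbf{v}$ evolving under Newton's second law, with a learned force term $\mathbf{F}_\theta$:
\[
\dot{\mathbf{s}}(t) = \mathbf{v}(t), \qquad
\dot{\mathbf{v}}(t) = \tfrac{1}{m}\,\mathbf{F}_\theta\big(\mathbf{s}(t), \mathbf{v}(t)\big),
\qquad
\mathbf{s}_{t+1} = \mathbf{s}_t + \mathbf{v}_t\,\Delta t, \quad
\mathbf{v}_{t+1} = \mathbf{v}_t + \tfrac{\Delta t}{m}\,\mathbf{F}_\theta(\mathbf{s}_t, \mathbf{v}_t).
\]
The discretized rollout on the right is the sort of latent trajectory a generator could be asked to respect frame by frame; $m$, $\Delta t$, and $\mathbf{F}_\theta$ are illustrative symbols, not notation taken from the paper.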
08:30 - 09:00: Jian Xu
Paper: Neural Bridge Processes
Abstract: Diffusion processes struggle with weak input coupling and endpoint mismatch in stochastic function modeling. Jian Xu introduces Neural Bridge Processes (NBPs), a novel method that reformulates the forward kernel so that the inputs ($\mathbf{x}$) act as dynamic anchors for the entire diffusion trajectory. This approach guarantees endpoint coherence and provides stronger gradient signals, achieving substantial improvements in structured prediction tasks such as EEG signal and image regression.
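For readers new to bridge constructions, a Brownian-bridge-style forward kernel pins the trajectory to a prescribed endpoint. The display below adds a hypothetical input-dependent anchor $f_\theta(\mathbf{x})$ purely as intuition for how inputs can anchor the whole trajectory; it is our own illustrative sketch, not the kernel defined in the paper:
\[
q\big(\mathbf{y}_t \mid \mathbf{y}_0, \mathbf{x}\big)
= \mathcal{N}\!\Big(\mathbf{y}_t;\ (1-t)\,\mathbf{y}_0 + t\,f_\theta(\mathbf{x}),\ t(1-t)\,\sigma^2 \mathbf{I}\Big),
\qquad t \in [0, 1],
\]
so the process lands exactly on the input-determined anchor at $t = 1$, which is one simple way to obtain the endpoint coherence the abstract refers to.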
This is a fantastic opportunity to engage directly with research that enhances both the applied realism of generative models and their mathematical foundation.
Time: 8:00 AM - 9:00 AM UTC
Location: Virtual
Register for this event here: https://lnkd.in/ehSQ9Gvc
Don't miss our future sessions! Find out more about upcoming events: https://lnkd.in/g7-iczUp
