
What we're about
ResearchTrend.AI Connect brings together AI researchers and practitioners for informal yet focused paper discussions. Each session features paired speakers presenting recent work across machine learning, vision, and language. We emphasise open dialogue, depth, and collaboration rather than passive listening. Join to discover new research directions, meet peers across academia and industry, and exchange ideas shaping the future of AI.
Upcoming events
• Online: Vision Language Models - Connect Session
Next ResearchTrend.AI VLM Connect Session: Focus on Efficiency & Structure!
This virtual session 💻 features two presentations from leading researchers 🧑‍🔬, tackling critical issues in modern AI model deployment and understanding.
Agenda (UTC)
07:00 - 07:30: Mengting Ai (University of Illinois Urbana-Champaign)
📄 Paper: NIRVANA: Structured pruning reimagined for large language models compression
💡 Abstract: Structured pruning often compromises zero-shot accuracy in LLMs. Mengting introduces NIRVANA, a theoretically sound method that uses a Neural Tangent Kernel-derived saliency and adaptive sparsity to effectively compress models (like Llama3 and Qwen) while preserving crucial performance, offering a major step toward practical LLM efficiency.
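NIRVANA's NTK-derived saliency is beyond a short snippet, but the structured-pruning idea it builds on can be sketched with a stand-in criterion. In this toy sketch, whole rows (output channels) are dropped by saliency; the row-norm saliency and the `prune_rows` helper are illustrative, not the paper's method:

```python
import numpy as np

def prune_rows(W, saliency, sparsity):
    """Structurally prune whole rows (output channels) of a weight
    matrix, keeping the rows with the highest saliency scores."""
    n_keep = int(round(W.shape[0] * (1.0 - sparsity)))
    keep = np.sort(np.argsort(saliency)[-n_keep:])  # most salient rows
    return W[keep], keep

# Toy example: 8 output channels, saliency = L2 norm of each row
# (a common magnitude proxy; NIRVANA instead derives saliency from
# the Neural Tangent Kernel).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
saliency = np.linalg.norm(W, axis=1)

W_pruned, kept = prune_rows(W, saliency, sparsity=0.5)
print(W_pruned.shape)  # (4, 4): half the channels removed
```

Because entire rows are removed, the pruned matrix stays dense and hardware-friendly, which is what distinguishes structured pruning from unstructured weight sparsity.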
07:30 - 08:00: Ankit Sonthalia (Scalable Trustworthy AI)
📄 Paper: On the rankability of visual embeddings
💡 Abstract: Ankit explores the concept of rankability: whether visual embeddings naturally align along linear directions that capture ordinal attributes (e.g., age, aesthetics). His findings reveal that this structure is often inherent, and meaningful rank axes can be recovered with minimal data, promising major improvements for image ranking systems.
🚀 Don't miss this opportunity to dive into research that makes AI models smarter, leaner, and more interpretable.
🗓️ Time: 7:00 AM - 8:00 AM UTC 📍 Location: Virtual
🔗 Register for this event here: https://lnkd.in/eM5wPGHK
Don't miss our future sessions! 👉 Find out more about upcoming events: https://lnkd.in/g7-iczUp
#VisionLanguageModels #VLM #AI #MachineLearning #DeepLearning #Research #Webinar #VirtualEvent #LLMCompression #ModelEfficiency #StructuredPruning #VectorDatabases #VisualEmbeddings
• Online: Diffusion Models - Connect Session
ResearchTrend.AI Diffusion Model Connect Session: 3D Video & Geometric Foundations!
We are excited to announce our upcoming biweekly Diffusion Model (DiffM) Connect Session on ResearchTrend.AI!
This virtual session 💻 features two presentations from leading researchers 🧑‍🔬, diving deep into the technical advancements and theoretical underpinnings of Diffusion Models.
Agenda (UTC) - Monday, November 24th
08:00 - 08:30: Geonung Kim
📄 Paper: VideoFrom3D: 3D Scene Video Generation via Complementary Image and Video Diffusion Models
💡 Abstract: Generating high-fidelity 3D scene videos is challenging due to the difficulty of jointly modeling visual quality, motion, and temporal consistency. Geonung will present VideoFrom3D, a novel framework that uses the complementary strengths of image and video diffusion models. It synthesizes high-quality, style-consistent videos from coarse 3D geometry and a camera path, streamlining 3D graphic design workflows without needing paired 3D/natural image datasets.
08:30 - 09:00: Xiang Li
📄 Paper: When Scores Learn Geometry: Rate Separations under the Manifold Hypothesis
💡 Abstract: Score-based methods (like diffusion models) are usually viewed as learning the full data distribution. Xiang will propose an alternative, groundbreaking perspective: their success comes from implicitly learning the data manifold (geometry). He reveals a sharp separation of scales, showing that learning the geometry is O(σ⁻²) stronger than learning the distribution, suggesting a paradigm shift from demanding distributional learning to more robust geometric learning.
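The σ⁻² scaling can be seen in the simplest possible toy setting (an assumption for illustration, not the paper's setup): data supported on the x-axis, smoothed by Gaussian noise of scale σ in the normal direction. The score's off-manifold component then has a closed form and grows quadratically as the noise shrinks:

```python
def score_normal_component(y, sigma):
    """For data supported on the x-axis and smoothed by Gaussian noise
    of scale sigma in the normal (y) direction, the marginal of y is
    N(0, sigma^2), so the score's normal component is -y / sigma^2.
    It points back toward the manifold with magnitude O(sigma^-2)."""
    return -y / sigma**2

y = 0.1  # small off-manifold displacement
for sigma in (0.1, 0.05, 0.025):
    print(sigma, score_normal_component(y, sigma))

# Halving sigma multiplies the restoring score by 4: the geometric
# (off-manifold) part of the score blows up like sigma^-2, while the
# on-manifold (distributional) part stays bounded.
```

This is the intuition behind the "separation of scales": at small noise levels, the score is dominated by the term that encodes where the manifold is, not by the density along it.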
🚀 This is a fantastic opportunity to engage directly with new, high-impact research at the frontier of generative AI and its theoretical limits.
🗓️ Time: 8:00 AM - 9:00 AM UTC
📍 Location: Virtual
🔗 Register for this event here: https://lnkd.in/eM5wPGHK
Don't miss our future sessions! 👉 Find out more about upcoming events: https://lnkd.in/g7-iczUp
#DiffusionModels #GenerativeAI #3DGraphics #VideoGeneration #AIResearch #MachineLearning #DeepLearning #Webinar #VirtualEvent #GeometricLearning #ScoreBasedModels #ManifoldHypothesis
• Online: Video Generation - Connect Session
We are excited to announce our upcoming biweekly Video Generation (VGen) Connect Session on ResearchTrend.AI!
This virtual session 💻 features two presentations from leading researchers 🧑‍🔬, diving deep into novel techniques for generating complex, high-quality, and temporally controlled videos.
Agenda (UTC) - Monday, November 24th
09:00 - 09:30: Junkun Chen
📄 Paper: Virtual Fitting Room: Generating Arbitrarily Long Videos of Virtual Try-On from a Single Image – Technical Preview
💡 Abstract: Generating long, consistent virtual try-on videos is a major challenge. Junkun introduces the Virtual Fitting Room (VFR), a pioneering video generative model that produces arbitrarily long virtual try-on videos. VFR uses an auto-regressive, segment-by-segment process, combined with prefix video conditions and a 360-degree anchor video, to ensure unprecedented local smoothness and global temporal consistency under various motions.
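The auto-regressive, segment-by-segment scheme with prefix conditioning can be sketched with a toy scalar "video" (every name here, including `generate_segment`, is an illustrative stand-in, not VFR's API; the real model continues video frames with a diffusion model rather than a random walk):

```python
import numpy as np

SEG_LEN, PREFIX = 8, 2  # frames per segment, frames carried over

def generate_segment(prefix_frames, rng):
    """Stand-in 'model': continue a scalar signal smoothly from the
    given prefix frames (VFR uses video diffusion here instead)."""
    last = prefix_frames[-1]
    steps = rng.normal(0.0, 0.05, size=SEG_LEN - len(prefix_frames))
    return np.concatenate([prefix_frames, last + np.cumsum(steps)])

def generate_video(n_segments, rng):
    video = list(generate_segment(np.zeros(PREFIX), rng))
    for _ in range(n_segments - 1):
        prefix = np.array(video[-PREFIX:])  # condition on the tail
        seg = generate_segment(prefix, rng)
        video.extend(seg[PREFIX:])          # drop the duplicated prefix
    return np.array(video)

video = generate_video(5, np.random.default_rng(0))
print(len(video))  # 32: five 8-frame segments sharing 2-frame prefixes
```

Because each segment is conditioned on the tail of the previous one, the boundaries stay locally smooth while the video can be extended indefinitely, which is the core of the arbitrarily-long-generation claim.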
09:30 - 10:00: Jibin Song
📄 Paper: Syncphony: Synchronized Audio-to-Video Generation with Diffusion Transformers
💡 Abstract: Achieving fine-grained motion synchronization in generated videos remains difficult. Jibin will present Syncphony, a method that generates high-resolution, 24fps videos perfectly synchronized with diverse audio inputs. Syncphony uses a Motion-aware Loss and Audio Sync Guidance to exploit audio cues effectively at inference, setting a new state-of-the-art in both synchronization accuracy (measured by the new CycleSync metric) and visual quality.
🚀 This is a fantastic opportunity to engage directly with new, high-impact research driving the future of controllable and long-form video generation.
🗓️ Time: 9:00 AM - 10:00 AM UTC 📍 Location: Virtual
🔗 Register for this event here: https://lnkd.in/eM5wPGHK
Don't miss our future sessions! 👉 Find out more about upcoming events: https://lnkd.in/g7-iczUp
#VideoGeneration #VGen #GenerativeAI #VirtualTryOn #TemporalConsistency #VideoEditing #AudioToVideo #DiffusionModels #AIResearch #MachineLearning #DeepLearning #Webinar #VirtualEvent
Past events

