
Details

Welcome to the Best of ICCV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

Nov 20, 2025
9 AM Pacific
Online. Register for the Zoom!

SGBD: Sharpness-Aware Mirror Gradient with BLIP-Based Denoising for Robust Multimodal Product Recommendation

The growing integration of computer vision and machine learning into the retail industry—both online and in physical stores—has driven the adoption of multimodal recommender systems to help users navigate increasingly complex product landscapes. These systems leverage diverse data sources, such as product images, textual descriptions, and user-generated content, to better model user preferences and item characteristics. While the fusion of multimodal data helps address issues like data sparsity and cold-start problems, it also introduces challenges such as information inconsistency, noise, and increased training instability.

In this paper, we analyze these robustness issues through the lens of flat local minima and propose a strategy that incorporates BLIP—a Vision-Language Model with strong denoising capabilities—to mitigate noise in multimodal inputs. Our method, Sharpness-Aware Mirror Gradient with BLIP-Based Denoising (SGBD), is a concise yet effective training strategy that implicitly enhances robustness during optimization. Extensive theoretical and empirical evaluations demonstrate its effectiveness across various multimodal recommendation benchmarks. SGBD offers a scalable solution for improving recommendation performance in real-world retail environments, where noisy, high-dimensional, and fast-evolving product data is the norm, making it a promising paradigm for training robust multimodal recommender systems in the retail industry.
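The abstract does not spell out the optimization details, but the "sharpness-aware" component is in the spirit of Sharpness-Aware Minimization (SAM). Below is a minimal sketch of a SAM-style update on a toy loss, assuming a generic gradient function; the mirror-gradient and BLIP-denoising parts of SGBD are omitted, so this is an illustration of the flat-minima idea, not the paper's algorithm:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware (SAM-style) update, a simplified stand-in
    for the sharpness-aware component described above.
    grad_fn(w) returns the loss gradient at parameters w."""
    g = grad_fn(w)
    # Ascend to the (approximate) worst-case point within an L2 ball of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed parameters,
    # which biases optimization toward flat minima.
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad(w) = w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda x: x)
```

On this toy loss the iterates settle near the (flat) minimum at the origin; in SGBD the same principle is applied to the multimodal recommender's training objective.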

About the Speaker

Kathy Wu holds a Ph.D. in Applied Mathematics and dual M.S. degrees in Computer Science and Quantitative Finance from the University of Southern California (USC), Los Angeles, CA, USA. At USC, she served as a course lecturer, teaching ML Foundations and ML for Business Applications in the science and business schools. Her academic research spans high-dimensional statistics, deep learning, and causal inference.

Kathy brings industry experience from Meta, LinkedIn, and Morgan Stanley in the Bay Area and New York City, where she focused on AI methodologies and real-world applications. She is currently an Applied Scientist at Amazon in the Global Store organization, leading projects in e-commerce recommendation systems, search engines, multi-modal Vision-Language Models (VLMs), and LLM/GenAI applications in retail.

Her work has been published in top-tier conferences including ICCV, CVPR, ICLR, SIGIR, and WACV. At ICCV 2025, she won the Best Paper Award in Retail Vision.

Spatial Mental Modeling from Limited Views

Can VLMs imagine unobservable space from just a few views, like humans do? Humans form spatial mental models, internal representations of "unseen space," to reason about layout, perspective, and motion. On our proposed MINDCUBE benchmark, we systematically observe a critical gap in VLMs' ability to build robust spatial mental models through representing positions (cognitive mapping), orientations (perspective-taking), and dynamics (mental simulation for "what-if" movements). We then explore three approaches to help VLMs approximate spatial mental models: unseen intermediate views, natural language reasoning chains, and cognitive maps.

The most significant improvement comes from "map-then-reason," which jointly trains the model to first abstract a cognitive map and then reason upon it. By training models to construct and reason over these internal maps, we boosted accuracy from 37.8% to 60.8% (+23.0%). Adding reinforcement learning pushed performance even further to 70.7% (+32.9%). Our key insight is that such scaffolding of spatial mental models, actively constructing and utilizing internal structured spatial representations with flexible reasoning processes, significantly improves understanding of "unobservable space."
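The "map-then-reason" idea can be illustrated with a toy example. The scene, object names, and map format below are hypothetical; the sketch only shows the two stages, abstracting per-view observations into one structured map, then answering a spatial query over the map rather than over raw views:

```python
def build_cognitive_map(observations):
    """Stage 1 ("map"): merge per-view observations into a single
    global map of object -> (x, y) position. In the actual work this
    abstraction is produced by the trained VLM, not hand-coded."""
    cog_map = {}
    for view in observations:
        cog_map.update(view)
    return cog_map

def is_left_of(cog_map, a, b):
    """Stage 2 ("reason"): answer a spatial query from the map,
    not from the pixels of any single view."""
    return cog_map[a][0] < cog_map[b][0]

# Two partially overlapping views of a hypothetical scene: no single
# view contains both the chair and the lamp.
views = [
    {"chair": (0, 1), "table": (2, 1)},   # view 1
    {"table": (2, 1), "lamp": (3, 0)},    # view 2
]
cog_map = build_cognitive_map(views)
print(is_left_of(cog_map, "chair", "lamp"))  # chair is left of lamp
```

The point of the sketch is that the chair/lamp relation is only recoverable after fusing views into one internal map, which is exactly the gap the benchmark probes.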

We aim to understand why geometric concepts remain challenging for VLMs and to outline promising research directions toward fostering more robust spatial intelligence.

About the Speaker

Manling Li is an Assistant Professor at Northwestern University and an Amazon Scholar. She was a postdoc at Stanford University and obtained her Ph.D. in Computer Science from the University of Illinois Urbana-Champaign in 2023. She works at the intersection of language, vision, and robotics, and her work has been recognized by MIT TR 35 Under 35, the ACL Inaugural Dissertation Award Honorable Mention, the ACL'24 Outstanding Paper Award, the ACL'20 Best Demo Paper Award, the NAACL'21 Best Demo Paper Award, the Microsoft Research PhD Fellowship, and EECS Rising Star.

Forecasting and Visualizing Air Pollution via Sky Images and VLM-Guided Generative Models

Air pollution monitoring is traditionally limited by costly sensors and sparse data coverage. Our research introduces a vision-language model framework that predicts air quality directly from real-world sky images and also simulates skies under varying pollution levels to enhance interpretability and robustness. We further develop visualization techniques to make predictions more understandable for policymakers and the public. This talk will present our methodology, key findings, and implications for sustainable urban environments.

About the Speaker

Mohammad Saleh Vahdatpour is a PhD candidate in Computer Science at Georgia State University specializing in deep learning, vision–language models, and sustainable AI systems. His research bridges generative AI, environmental monitoring, and motion perception, focusing on scalable and energy-efficient models that connect scientific innovation with real-world impact.

Sari Sandbox: A Virtual Retail Store Environment for Embodied AI Agents

We present Sari Sandbox, a high-fidelity, photorealistic 3D retail store simulation for benchmarking embodied agents against human performance in shopping tasks. Addressing a gap in retail-specific simulation environments for embodied-agent training, Sari Sandbox features over 250 interactive grocery items across three store configurations, controlled via an API. It supports both virtual reality (VR) for human interaction and a vision-language model (VLM)-powered embodied agent.

We also introduce SariBench, a dataset of annotated human demonstrations across varied task difficulties. Our sandbox enables embodied agents to navigate, inspect, and manipulate retail items, providing baselines against human performance. We conclude with benchmarks, performance analysis, and recommendations for enhancing realism and scalability.
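As a sketch of what controlling such an environment through an API might look like, here is a toy observe/act loop. The environment class, method names, and policy below are illustrative stand-ins, not the actual Sari Sandbox API:

```python
class MockStoreEnv:
    """A minimal stand-in for an API-controlled retail environment:
    the agent's task is to put a target item into its basket."""
    def __init__(self, items, target):
        self.items = set(items)
        self.target = target
        self.basket = []

    def observe(self):
        # A real environment would return rendered images; here we
        # return a symbolic observation for simplicity.
        return {"visible_items": sorted(self.items)}

    def pick(self, item):
        if item in self.items:
            self.items.remove(item)
            self.basket.append(item)

def shopping_policy(obs, target):
    """Trivial scripted policy: pick the target if visible. A VLM
    agent would instead choose actions from images and instructions."""
    if target in obs["visible_items"]:
        return ("pick", target)
    return ("search", None)

env = MockStoreEnv(["milk", "bread", "soap"], target="bread")
action, arg = shopping_policy(env.observe(), env.target)
if action == "pick":
    env.pick(arg)
```

Benchmarks like SariBench can then compare how often such an agent completes the task against annotated human demonstrations.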

About the Speakers

Emmanuel G. Maminta is a fourth-year Artificial Intelligence Ph.D. student at the Ubiquitous Computing Laboratory (UCL) at the University of the Philippines Diliman, advised by Prof. Rowel O. Atienza.

Janika Deborah B. Gajo is an undergraduate student pursuing a Bachelor of Science in Computer Engineering at the University of the Philippines Diliman.

Artificial Intelligence
Computer Vision
Machine Learning
Data Science
Open Source
