Putting visual recognition in context


Details
Recent studies have shown that visual recognition networks can be fooled by placing objects in inconsistent contexts (e.g., a pig floating in the sky). This lecture covers two representative works that model the role of contextual information in visual recognition. We systematically investigated where, when, and how context modulates recognition.
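As an illustration, here is a minimal sketch of how such an out-of-context test image can be composited: a segmented object cutout is pasted onto an unrelated background. The file names, sizes, and paste position are hypothetical placeholders, not the stimulus pipeline used in the papers.

```python
# A minimal sketch (hypothetical file names and geometry) of compositing an
# out-of-context image: paste a segmented object cutout onto an unrelated scene.
from PIL import Image

background = Image.open("sky.jpg").convert("RGB")  # hypothetical background scene
cutout = Image.open("pig.png").convert("RGBA")     # hypothetical object cutout with alpha mask

# Scale the object relative to the background, then place it somewhere
# implausible (here: floating in the upper part of the sky).
cutout = cutout.resize((background.width // 4, background.height // 4))
background.paste(cutout, (background.width // 2, background.height // 8), mask=cutout)
background.save("pig_in_sky.jpg")
```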
In the first work, we studied the amount of context, the relative resolution of the context and the object, the geometrical structure of the context, context congruence, and the temporal dynamics of contextual modulation in real-world images.
In the second work, we explored more challenging properties of contextual modulation, including gravity, object co-occurrence, and relative object size, in synthetic environments.
In both works, we conducted a series of experiments to gain insights into the impact of contextual cues on both human and machine vision:
- Psychophysics experiments that establish a human benchmark for out-of-context recognition, which we then compared with state-of-the-art computer vision models to quantify the gap between the two.
- New context-aware recognition models that capture information useful for contextual reasoning. These models reach human-level performance and are significantly more robust in out-of-context conditions than baseline models, across both synthetic scenes and existing out-of-context natural-image datasets (a generic sketch of combining object and context features follows below).
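To make the modeling idea concrete, here is a minimal, hypothetical sketch of a two-stream classifier in PyTorch that encodes the target object crop and the full scene separately and fuses the features before classification. The architecture, backbones, and class count are illustrative assumptions, not the models proposed in the papers; see the repositories linked below for those.

```python
# A simplified, hypothetical sketch of a context-aware recognizer: one stream
# encodes the cropped target object, a second encodes the full scene, and the
# fused features drive classification. This illustrates the general two-stream
# idea only, not the architecture from the papers.
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamContextNet(nn.Module):
    def __init__(self, num_classes: int = 10):  # placeholder class count
        super().__init__()
        # Separate backbones for the object crop and the surrounding scene.
        self.object_stream = models.resnet18(weights=None)
        self.context_stream = models.resnet18(weights=None)
        feat_dim = self.object_stream.fc.in_features  # 512 for ResNet-18
        self.object_stream.fc = nn.Identity()
        self.context_stream.fc = nn.Identity()
        # Fuse the two feature vectors and classify.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, object_crop: torch.Tensor, scene: torch.Tensor) -> torch.Tensor:
        obj_feat = self.object_stream(object_crop)  # (B, 512)
        ctx_feat = self.context_stream(scene)       # (B, 512)
        return self.classifier(torch.cat([obj_feat, ctx_feat], dim=1))

# Usage with dummy inputs:
model = TwoStreamContextNet()
crop, scene = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
logits = model(crop, scene)  # shape (2, 10)
```

Keeping the streams separate lets the model weigh object appearance against scene context, which is one simple way to probe how much contextual cues contribute to recognition.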
Lecture slides:
Part 1 - https://drive.google.com/file/d/1l-cPS-5dc6NdJ6Ho5jAvuJ0Jl38nTBkE/view?usp=sharing
Part 2 - https://drive.google.com/file/d/1jziRhGp5jxKUR-Ja8TwlaNb-IMNCkPdd/view?usp=sharing
The talk is based on the speakers' papers:
Putting visual object recognition in context (CVPR 2020)
Paper: https://arxiv.org/abs/1911.07349
Git: https://github.com/kreimanlab/Put-In-Context
When Pigs Fly: Contextual Reasoning in Synthetic and Natural Scenes
Paper: http://arxiv.org/abs/2104.02215
Git: https://github.com/kreimanlab/WhenPigsFlyContext
Presenter BIOs:
Philipp Bomatter is a master's student in Computational Science and Engineering at ETH Zurich.
He is interested in artificial intelligence and neuroscience and currently works on a project on contextual reasoning in vision at the Kreiman Lab at Harvard University.
Mengmi Zhang completed her PhD at the Graduate School for Integrative Sciences and Engineering, NUS, in 2019. She is now a postdoc in the Kreiman Lab at Children's Hospital, Harvard Medical School.
Her research interests include computer vision, machine learning, and cognitive neuroscience. In particular, she studies high-level cognitive functions in humans, including attention, memory, learning, and reasoning, using psychophysics experiments, machine learning approaches, and neuroscience.
** ** Please register through the Zoom link right after your RSVP. We will send the link to the Zoom event via email only to those who have registered through Zoom. ** **
-------------------------
Find us at:
All lectures are uploaded to our YouTube channel ➜ https://www.youtube.com/channel/UCHObHaxTXKFyI_EI8HiQ5xw
Newsletter for updates about more events ➜ http://eepurl.com/gJ1t-D
Sub-reddit for discussions ➜ https://www.reddit.com/r/2D3DAI/
Discord server for, well, discord ➜ https://discord.gg/MZuWSjF
Blog ➜ https://2d3d.ai
AI Consultancy ➜ https://abelians.com
