Interpretable Vision and Language Models


Details
This event is part of the Helmholtz Imaging Annual Conference 2024 in Heidelberg, a conference tailored to scientists and researchers who conduct imaging research or use imaging techniques in their work.
We're excited to share one of the conference keynotes with the Heidelberg AI community! Participants of this Heidelberg AI talk do not need to be registered for the conference.
Explainable AI (XAI) addresses one of the most critical concerns in the adoption of AI technologies: transparency. XAI seeks to make the decision-making processes of AI systems clear and understandable to human users. This transparency is vital for building trust, particularly in sensitive areas such as healthcare, finance, and autonomous driving, where understanding AI’s decision process is crucial for acceptance and ethical considerations.
This research is particularly crucial when applied to large vision-language models, which are increasingly used to handle complex tasks that involve understanding and generating content from both visual and textual data.
Tuesday, May 14th, 11:15 am
Event details:
[Heidelberg.ai](https://heidelberg.ai/2024/05/14/zeynep_akata.html)
Announcement
Professor Akata will not be able to attend the conference in person; her talk will therefore be live-streamed at the event.
Abstract
In this talk, Professor Zeynep Akata delves into the transformative impacts of representation learning, foundation models, and explainable AI on machine learning technologies. She highlights how these approaches enhance the adaptability, transparency, and ethical alignment of AI systems across various applications. Professor Akata will address the synergy between these technologies and their crucial role in advancing AI, aiming to make these complex systems more accessible and understandable.
Bio
Zeynep Akata is the Director of the Institute of Explainable Machine Learning and a Professor of Computer Science at the Technical University of Munich. Her research focuses on making AI-based systems more transparent and accountable, particularly through explainable, multi-modal, and low-shot learning in computer vision. She has previously held positions at the University of Tübingen and the Max Planck Institutes, and her recognitions include the Lise Meitner Award for Excellent Women in Computer Science, an ERC Starting Grant, and the German Pattern Recognition Award. For more details, see her profile.
