DSPT#79 Webinar - Making Learning-based Visual Representations more Intelligible
Details
Most widely used ML models are black boxes: their internal representations do not have to make sense to human understanding.
This can lead to unwanted and unnoticed bias, which is a major disadvantage in our day and age. With the help of our speaker, Prof. José Oramas, we will try to pry open the black box that is a neural network.
The schedule for the webinar is as follows:
• 18:30 - 18:45: Opening the meeting
• 18:45 - 19:30: Making Learning-based Visual Representations more Intelligible - José Oramas
• 19:30 - 19:45: Q&A
• 19:45: Closing
See you there!
========================
Abstract:
Representations learned via deep neural networks (DNNs) have achieved impressive results for several automatic tasks (image recognition, text translation, super-resolution, etc.). This has motivated the wide adoption of DNN-based methods, despite their black-box characteristics. In this talk, I will cover several efforts aiming at designing algorithms capable of revealing what type of information is encoded in a learned representation (model interpretation) and justifying the predictions made by a DNN (model explanation).
Bio:
José Oramas is an Assistant Professor at the Internet Data Lab (imec-IDLab) at the University of Antwerp. He received his PhD from KU Leuven in 2015. Over the last 12 years, he has conducted research on understanding how groups of elements from images interact and how the relationships between them can be exploited to improve performance on several computer vision problems. Currently, his interests are focused on making AI systems more intelligible, i.e. capable of being understood by humans. This is achieved via exploratory/explanatory models that can identify informative intermediate representations and use them as a means to justify the predictions the models make.
