
Interpretable AI

Hosted By
Neil Yorke-Smith

Details

Aula Collegezaal A

Setup: two research talks (academia/industry), with drinks afterwards sponsored by Delft Data Science.

+++

Nicola Pezzotti, Philips Research, will speak on "Towards an Interpretable AI".

Machine learning plays an increasingly important role in several fields, from computer vision to biomedical data analysis. Modern machine learning models, often based on Deep Neural Networks, are now rivalling human accuracy in several pattern recognition problems. Compared to traditional approaches, where features are handcrafted, neural networks learn increasingly complex features directly from the data.

However, the complexity of these models makes it difficult to understand which computations they perform and how they will behave when deployed on previously unseen data. For example, corporations such as Google and Microsoft have received negative press coverage due to the unexpected behaviour of their models.

Visual Analytics and dimensionality reduction are key to understanding the behaviour of trained models. In this talk I present techniques that allow us to "open the black box" and make these models interpretable. Moreover, I will present the most pressing challenges the Machine Learning and Visual Analytics communities are facing in the field of Interpretable AI.
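As a flavour of the kind of analysis the talk refers to, the sketch below (not the speaker's own tooling) projects the hidden-layer activations of a trained classifier into 2D with t-SNE so they can be inspected visually; the `activations` and `labels` arrays are hypothetical inputs taken from an intermediate layer of some model.

```python
# Minimal sketch: visualise what a trained model has learned by embedding
# its hidden-layer activations with t-SNE and colouring points by class.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE


def plot_activation_map(activations, labels):
    """Embed high-dimensional activations in 2D and colour by class label.

    activations: (n_samples, n_features) array from an intermediate layer.
    labels:      (n_samples,) ground-truth or predicted class labels.
    """
    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(activations)
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab10")
    plt.title("t-SNE of hidden-layer activations")
    plt.show()

# Usage (with arrays extracted from your own model):
# plot_activation_map(activations, labels)
```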

Dr Jan van Gemert, head of the Computer Vision Lab, TU Delft, will speak on "Active Decision Boundary Annotation with Deep Generative Models".

This talk is on Active Learning, where the goal is to reduce the data-annotation burden by interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1D line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged into other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples.
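To make the idea concrete, here is a minimal sketch of finding a point on a 1D line in latent space where the class flips. The interfaces `decode` (the deep generative model) and `predict` are illustrative assumptions; in the talk's setting a human oracle marks where the generated instances change class, whereas here a classifier's prediction stands in for the oracle purely to keep the sketch self-contained.

```python
# Sketch: bisection along the line between two latent codes z_a and z_b to
# locate the point where the predicted class changes, i.e. a point close to
# the decision boundary. `decode` and `predict` are assumed callables.
import numpy as np


def boundary_point_on_line(z_a, z_b, decode, predict, tol=1e-3):
    """Return a latent vector on the segment [z_a, z_b] near the class flip,
    or None if both endpoints are assigned the same class."""
    label_a = predict(decode(z_a))
    if label_a == predict(decode(z_b)):
        return None  # no class change along this line

    lo, hi = 0.0, 1.0  # interpolation weights bracketing the flip
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        z_mid = (1.0 - mid) * z_a + mid * z_b
        if predict(decode(z_mid)) == label_a:
            lo = mid  # flip lies further towards z_b
        else:
            hi = mid  # flip lies between lo and mid

    alpha = (lo + hi) / 2.0
    return (1.0 - alpha) * z_a + alpha * z_b
```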

+++

Drinks and networking
