Keep-Current :: Machine Learning Seminar #4 - Model Interpretability

Keep-Current Meetup
Public group



Level: Advanced

In this series of seminars, we are diving deeper into understanding, approaching and working with Machine Learning algorithms from different perspectives.

These events are not lectures, but rather discussions that aim to expand our know-how and understanding of machine learning.

It is known that the best way to learn and understand something fully is to teach it to others. Hence, this is an opportunity for you to stand out and demonstrate what you have learnt, while simultaneously deepening your knowledge of the field by teaching and explaining it to the other members of the group.

Yet, this is not a competition. Gaps in the material can and should be filled by other members in the group. We're here to learn from each other - without judging.


Our topic this evening is interpretability and explainable machine-learning models.

As our models get more and more complex and non-linear, it becomes difficult to understand clearly how they function and what has led to a specific result.

Many attempts have been made in the field so far: visualization of CNN neurons, and measuring feature importance using LIME, SHAP, or TCAV, to name a few. These methods shed light on the importance of each feature for a specific prediction and can help determine whether the model is biased.
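To make "feature importance" concrete before the seminar, here is a minimal sketch of permutation importance, a simpler relative of the methods above (not LIME, SHAP, or TCAV themselves). The `model` function and the data are invented purely for illustration: it shows how shuffling a feature that the model relies on degrades its predictions, while shuffling an irrelevant feature changes nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "black-box" model: depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2.
def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

X = rng.normal(size=(1000, 3))
y = model(X)

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean increase in squared error when each feature is shuffled."""
    base = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base
    return importances / n_repeats

imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates, feature 2 contributes nothing
```

Unlike LIME or SHAP, this gives a global importance score rather than an explanation of a single prediction, but the underlying question - "how much does the model rely on this feature?" - is the same.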


The seminar format works best if you come prepared. Please check the reading list below and bring your own insights, questions, and perplexities to the table!

## Recommended reading list:

- What exactly do we mean by model interpretability?

- Jenn Wortman Vaughan from Microsoft examines various aspects of interpretability from a user perspective:

- Viktoria Krakovna from DeepMind discusses the importance of interpretability in reinforcement learning:

- An extended book about ML interpretability (if you wish to dig deeper):

- Analyzing feature importance using a 'shadow' linear model:

- The Intriguing Properties of Model Explanations:

- A lightning talk (slides 23 - 27, although the rest is interesting too ;) ):

- Manifold:

Various ML model architectures:

- CNN dissection and visualization:

- GAN dissection:

- Seq2Seq:

- NLP - word & sentence embeddings:

- Autonomous vehicles:
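The 'shadow' linear-model item in the list above refers to the surrogate-model idea: fit a simple, interpretable model to the predictions of a black box and read the explanation off its weights. A minimal sketch, where `black_box` is a made-up stand-in for any trained predictor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box predictor (stand-in for any trained model).
def black_box(X):
    return np.tanh(2.0 * X[:, 0]) + 0.3 * X[:, 1]

X = rng.normal(size=(500, 3))
yhat = black_box(X)  # we explain the model's outputs, not the true labels

# Fit a linear "shadow" model to the black box's outputs via least squares.
A = np.column_stack([X, np.ones(len(X))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, yhat, rcond=None)
weights, intercept = coef[:3], coef[3]
print(weights)  # large weight on feature 0, small on 1, ~0 on feature 2
```

The shadow model's weights then serve as a global, human-readable summary of which inputs drive the black box - at the cost of missing any non-linear structure the surrogate cannot express.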


As always, if you have more sources, please share them in the comments or the discussions/forum section of the meetup.

We look forward to seeing you!