
Details

Note: You will also need to register for the event at Skills Matter: https://skillsmatter.com/meetups/11244-london-data-science-journal-club

In many cases, understanding how a model makes its predictions is as important as the accuracy of those predictions. As models grow more complex, tools such as LIME and DeepLIFT have been developed to help users interpret what their models compute.

This paper presents a unified framework for such interpretations, SHAP (SHapley Additive exPlanations). For a given prediction, SHAP assigns an importance value to each feature.

The approach is based on the game-theoretic concept of Shapley values, and the paper provides the following results:

  1. The authors define the class of additive feature attribution methods, which unifies six existing methods.
  2. They propose SHAP values as a unified measure of feature importance that these methods approximate (a short code sketch follows this list).
  3. They propose new SHAP value estimation methods that align better with human intuition than existing methods.
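
To give a flavour of what this looks like in practice, here is a minimal sketch using the authors' open-source shap Python package. The toy dataset, model choice, and exact API details are illustrative assumptions and may vary between library versions.

    # Minimal sketch: per-prediction feature attributions with the `shap` package.
    # The dataset and model below are stand-ins for whatever model you want to explain.
    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Toy regression data and a tree-ensemble model
    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # The "additive" part of the framework: for one prediction, the base value
    # (expected model output) plus the SHAP values reconstructs the model output.
    i = 0
    base_value = np.ravel(explainer.expected_value)[0]
    print("model output:   ", model.predict(X[i:i + 1])[0])
    print("base + SHAP sum:", base_value + shap_values[i].sum())

The two printed numbers should agree: the sum of a prediction's SHAP values plus the expected value recovers the model's output, which is the additive property the paper formalises.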

The paper was presented at NIPS 2017.

Paper:
A Unified Approach to Interpreting Model Predictions - Scott M. Lundberg, Su-In Lee
http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions

Resources:

A note about the Journal Club format:

  1. There is no speaker at Journal Club.
  2. There is NO speaker at Journal Club.
  3. We split into small groups of 6 people and discuss the papers. For the first hour the groups are random to make sure everyone is on the same page. Afterwards we split into blog/paper/code groups to go deeper.
  4. Volunteers sometimes seed the discussion by walking through the paper's highlights for about 5 minutes. You are very welcome to volunteer in the comments.
  5. Reading the materials in advance is really helpful. If you don't have time, please come anyway. We need this group to learn together.
