
Explaining the predictions of any machine learning classifier

Hosted By
Iver J. and William N.
Details

LIME (Local Interpretable Model-agnostic Explanations) is an algorithm for explaining why a classifier made the prediction it did. It works with any type of classifier (decision tree, neural network, and so on). The algorithm finds which parts of the input (for example, the words in a piece of text) mattered most when the classifier assigned a class. A fun intro video and the GitHub package can be found here, followed by a short usage sketch:

https://www.youtube.com/watch?v=hUnRCxnydCc
https://github.com/marcotcr/lime/
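
As a taste of what the talk covers, here is a minimal sketch of explaining a single text prediction with the lime package linked above. It assumes scikit-learn and a classifier that exposes predict_proba; the 20 Newsgroups categories and the number of features shown are illustrative choices, not part of the talk.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Illustrative dataset: two 20 Newsgroups categories.
categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

# Any model exposing predict_proba works -- LIME is model-agnostic.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=train.target_names)
explanation = explainer.explain_instance(
    train.data[0],              # the single document we want explained
    pipeline.predict_proba,     # black-box probability function
    num_features=6,             # number of words to include in the explanation
)

# Words with the largest weights pushed the prediction the most.
print(explanation.as_list())

The printed list pairs each influential word with a weight, which is exactly the "which parts of the input mattered most" output described above.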

In the talk, William Naylor from Inmeta will discuss what the algorithm is and how it works. He will then give some concrete examples of it in use, along with ideas for further applications. We will also discuss the algorithm's various pitfalls and its follow-up algorithm, Anchors.

Intended audience: people who want to learn more about the technical details of trust and explainability/interpretability in AI.

Pizza will be provided, and drinks can be bought in the bar. Note that this meetup takes place in the bar, not in the meeting room upstairs.

Trondheim Machine Learning Meetup
Work-Work
Munkegata 58 · Trondheim