
[ML] Explainable AI (XAI)

Hosted by Ding D. and 4 others

Details

Agenda:

18:30 - Food, drinks, networking

19:00 - Dean Allsopp - An overview of interpretability in machine learning

19:50 - Short break

20:00 - Janis Klaise - Practical machine learning model interpretability with Alibi

21:00 - Event close

Talk 1:

Being able to communicate how machine learning predictions are made can provide a foundation for fairness, accountability and transparency in their use. With complex models such as tree ensembles and neural networks, however, communicating how specific predictions are made is a challenge.
What open source machine learning interpretation tools are available now, and how can they help? By looking at both techniques and tools, this presentation aims to offer practical help with answering these questions about supervised ML (a short code sketch follows the list):

- What sort of interpretations are provided?

- Who is likely to understand these interpretations?

- What interpretation packages work with which ML algorithms?

- How do the interpretation techniques work?
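
Not part of the talk abstract, but as a taste of what such tools look like in practice, here is a minimal sketch of one model-agnostic technique, permutation importance, using scikit-learn; the dataset and model are illustrative assumptions, not the speaker's examples:

```python
# A minimal, illustrative sketch of model-agnostic interpretation using
# permutation importance. Assumes scikit-learn is installed; the dataset
# and model are placeholders, not the speaker's examples.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Permutation importance answers the global question of which features a model relies on; per-prediction techniques such as anchors or counterfactuals (see Talk 2) answer why an individual prediction was made.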

Bio: Dean Allsopp is a database programmer/architect turned data scientist who aims to help businesses use machine learning responsibly.

Talk 2:

Practical machine learning model interpretability with Alibi.

Being able to reason about the predictions of a machine learning system is becoming increasingly important as sophisticated, non-linear predictive models are being adopted across the enterprise and beyond.
In this talk I will discuss some requirements and challenges of model explanation algorithms and demo some practical examples using the open-source library we've developed at Seldon.

  • What makes an explanation interpretable?
  • The trade-off between interpretability and fidelity of an explanation algorithm
  • Practical examples of using interpretability techniques (e.g. anchor explanations, counterfactual search) for classification of tabular data, text and images using the open-source library Alibi (see the sketch after this list)
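
To give a flavour of the kind of demo described above, here is a minimal sketch of an anchor explanation on tabular data with Alibi; the dataset and model are illustrative assumptions, and attribute names on the returned Explanation object vary slightly between alibi releases:

```python
# A minimal, illustrative sketch of an anchor explanation with Alibi.
# Assumes alibi and scikit-learn are installed; the dataset and model are
# placeholders. Explanation attribute names vary between alibi releases.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# AnchorTabular needs a prediction function and the feature names.
explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)  # learns feature quantiles used for perturbations

# An anchor is a set of feature conditions that "locks in" a prediction:
# while the conditions hold, the model almost always predicts the same class.
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:   ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)
```

Counterfactual search, the other technique mentioned above, asks the complementary question: what is the smallest change to this input that would flip the model's prediction?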

Bio: Janis Klaise is a Data Scientist at Seldon primarily working on algorithms to provide rich information beyond raw predictions for live ML systems (e.g. model explanations, outlier detection, model confidence, concept drift).

GDG Reading & Thames Valley
Central Working
Blagrave Street · Reading