Explainable AI – Opening the Black Box of Machine Learning Models

Details

QuantumBlack is an advanced analytics firm operating at the intersection of strategy, technology and design to improve performance outcomes for organisations. For the first time in Australia, we are excited to open the doors of our Experience Studio to the data science community for our inaugural Meet-Up in Sydney!

For this first Meet-Up, we’ve challenged ourselves to find a topic that strikes the balance between the ‘new’ and the ‘impactful’, and we’ve homed in on the theme of black boxes. Most machine learning models are black boxes, and that opacity makes it hard to build the trust and transparency businesses need to grow to the next level.

In this Meet-Up, we’ll show you how to lift the lid on these black boxes and make your models more actionable and trustworthy with Explainable AI! We’ll share practical methods that we use in our day-to-day work, and we’ll demonstrate global and local explanations using LIME and SHAP as examples.
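To give a flavour of what that looks like in practice, here is a minimal sketch on a public dataset (an illustration for this write-up, not our production workflow): LIME fits a sparse linear surrogate around a single prediction for a local explanation, while averaging the magnitudes of SHAP’s per-prediction attributions gives a global feature ranking. The model, dataset and hyperparameters below are arbitrary choices for the demo.

```python
# Minimal LIME + SHAP sketch on a scikit-learn classifier.
# Assumes: pip install scikit-learn lime shap
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: local explanation of one prediction via a sparse linear surrogate.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top feature contributions for this instance

# SHAP: per-prediction Shapley attributions; mean |value| across the test
# set gives a global importance ranking.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
# The return layout varies across shap versions (list per class vs. 3D
# array); take the positive class either way.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
global_importance = np.abs(sv).mean(axis=0)
top = np.argsort(global_importance)[::-1][:5]
print([(data.feature_names[i], round(float(global_importance[i]), 4))
       for i in top])
```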

We are also delighted to have an incredible guest speaker join us: Roman Marchant from the Centre for Translational Data Science at the University of Sydney, who will elaborate on how to use Bayesian methodology for Explainable AI and build practical models that are both flexible and interpretable.

This is one that’s not to be missed if you’re keen to find out more about Explainable AI – please RSVP! All drinks and food will be provided.

Bio: Roman Marchant
Roman obtained his PhD at the University of Sydney. His current research at the Centre for Translational Data Science explores applying data science to the social sciences, currently focusing on predicting crime and understanding criminal behaviour. His area of expertise is Sequential Bayesian Optimisation (SBO), a novel probabilistic method for finding the sequence of decisions that maximises a long-term reward.

Abstract:
In this talk I will elaborate on the need for explainable and transparent models. Using statistical models and Bayesian methodology, it is possible to build models that are simultaneously flexible and interpretable. I will show in practice how these are derived in collaboration with domain experts, and present the current options for learning patterns from data by estimating model parameters. I will conclude with real-world examples of the applicability of Explainable AI, including an interpretable model for the occurrence of crime.
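For readers who want a feel for what “flexible and interpretable” can mean in code, here is a minimal sketch (our illustration, not Roman’s actual crime model) of a Bayesian Poisson regression for event counts, written against the PyMC API; the covariates and data below are synthetic and purely hypothetical.

```python
# Sketch of an interpretable Bayesian count model (hypothetical example).
# Assumes: pip install pymc arviz  (PyMC >= 4 API)
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)

# Synthetic data: counts driven by two named covariates
# (e.g. population density, time of day -- labels are illustrative).
n = 500
X = rng.normal(size=(n, 2))
true_beta = np.array([0.8, -0.3])
y = rng.poisson(np.exp(0.5 + X @ true_beta))

with pm.Model() as model:
    # Priors encode domain knowledge; each coefficient is the log-rate
    # effect of its covariate, so the fitted model is directly readable.
    intercept = pm.Normal("intercept", mu=0, sigma=2)
    beta = pm.Normal("beta", mu=0, sigma=1, shape=2)
    rate = pm.math.exp(intercept + pm.math.dot(X, beta))
    pm.Poisson("counts", mu=rate, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Posterior summaries give point estimates *and* uncertainty for every
# parameter, rather than a single opaque prediction.
print(az.summary(trace, var_names=["intercept", "beta"]))
```

Because every parameter has a direct meaning (a log-rate effect) and a full posterior, domain experts can read and critique the model directly, which is the property the abstract highlights.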