
AI & Explainability

Hosted by Fabian H.

Details

CAIML #6 will take place on March 26, 2019. Many thanks to eyeo (https://eyeo.com) and fedger (http://fedger.co) for their support in hosting and organizing the event!

This time, we will learn more about AI and explainability. So far, AI has focused largely on machine learning, where algorithms make predictions or decisions, and we often do not understand why an algorithm arrives at a particular decision. In many cases, however, such understanding is not only desirable but also required by law. For that reason, there is a growing need to explain why an algorithm makes a particular prediction. At CAIML #6, we will discuss these questions and explainable AI in detail.

CAIML #6 is supported by AI Spektrum (https://ai-spektrum.de), coparion (https://coparion.vc), Gaffel (https://www.gaffel.de) and goedle.io (https://goedle.io).

Agenda (tentative):

18:30 - Doors open

19:00 - Welcome

19:15 - Volker Kraft (JMP Sr. Academic Ambassador at SAS, https://www.linkedin.com/in/vkraft/): Machine Learning Makes it Predictive; How to Make it Explanatory?

Machine learning is about creating predictive models. If the target (a.k.a. response) is nominal, a good model will maximize true positives and minimize false positives, which can inform dispositioning, for instance. Designed experiments use many of the same analysis algorithms, but the focus is on causality and the appropriate recipe for success. JMP users may be more accustomed to this paradigm, so they will often look at models and ask: "Why is the prediction so high? How do we improve it? What is important about the business?" These questions ask predictive models to become explanatory. The difference between prediction and root cause can be overlooked, yet we know correlation does not imply causation, and the art of explaining data is getting lost in the push for advanced modeling techniques. Such questions can be hard to answer: restricted variable ranges and multicollinearity can make it very difficult to go from a predictive model to an explanatory one. Further, numeric summaries of models do not encourage subject-matter experts to ask questions. The humble Profiler in JMP is a powerful tool for making models talk; even for a data miner, interactive visualization can make a difference. (A rough open-source sketch of this kind of model profiling appears after the agenda.)

19:45 - Break with drinks 🍻 and pizza 🍕 brought to you by coparion and Gaffel

20:15 - Oleksandr Paraska (Machine Learning Engineer at eyeo GmbH, https://www.linkedin.com/in/shoniko/): The Curse of Explainability

Distributed representations in deep models make them notoriously hard to explain, but because they are so useful we still have to develop approaches for relying on them. We will take a hands-on look at the story of one experiment at eyeo GmbH (the company behind Adblock Plus) and at how the pursuit of explainability leads organizations to discover the right questions to ask about their products. We will look at the problem of explainability from the perspective of adversarial robustness, present examples from our experiments, and discuss how explainability can be both a good and a bad thing in adversarial scenarios. (A minimal sketch of this double edge appears after the agenda.)

20:45 - Closing & Networking
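
The JMP Profiler traces how a fitted model's prediction responds as one factor is varied while the others are held fixed. Below is a minimal open-source sketch of the same idea using scikit-learn's partial dependence display; the dataset, model, and feature names ("bmi", "bp") are illustrative assumptions, not material from the talk.

# Profile a fitted model: how does the prediction respond to one input
# while the other inputs stay at their observed values?
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

# Illustrative data: the scikit-learn diabetes regression set (an assumption).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# "Why is the prediction so high?" -- inspect the response surface for the
# features a subject-matter expert cares about.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.tight_layout()
plt.show()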
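
The double edge mentioned in the second abstract can be sketched in a few lines: the input gradient that yields a saliency-style explanation is the same quantity an adversary can use for a fast-gradient-sign (FGSM) perturbation. The model and data below are random placeholders chosen for illustration, not eyeo's ad-blocking models.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier and input -- stand-ins, not a real production model.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 20, requires_grad=True)
target = torch.tensor([1])

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

saliency = x.grad.abs()            # "explanation": which inputs mattered most
x_adv = x + 0.1 * x.grad.sign()    # FGSM: the same gradient points the attack

print("most influential input:", saliency.argmax().item())
print("prediction before:", model(x).argmax(dim=1).item(),
      "and after the perturbation:", model(x_adv).argmax(dim=1).item())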

Cologne AI and Machine Learning Meetup
eyeo GmbH
Lichtstraße 25 · Köln