Trees, Oracles, Explanations


Details

Please note that photo ID will be required. To ensure entry, please make sure your Meetup profile name includes your full name.

Agenda:
- 18:30: Doors open, pizza, beer, networking
- 19:00: First talk
- 19:45: Break & networking
- 20:00: Second talk
- 20:45: Close

*Sponsors*
Man AHL: At Man AHL, we mix machine learning, computer science and engineering with terabytes of data to invest billions of dollars every day.
Evolution AI: Build a state-of-the-art NLP pipeline in seconds.

* Trees, Oracles, Explanations – Using Ancient AI Wisdom to Make Sense of Modern ML Magic (Tarek R. Besold)

Abstract: The first part provides a brief introduction to the current state of interpretable/comprehensible/explainable AI and ML, clarifying the "state of play": so-called interpretable methods (such as the popular LIME framework) are not by default comprehensible to naïve users, but often require expert understanding to yield reliable information about the model. So-called comprehensible methods aim to provide understandable information to users, but the actual model remains a black box. Explainable methods (turning the black box into a white-ish box while remaining understandable to naïve users) are currently only in the very early stages of development.

The second part then presents our current work at Alpha revamping and updating an approach proposed in the early 1990s, called TREPAN, which extracts decision trees from neural networks as a more understandable representation. The decision tree approximates what the network has actually learned by using the trained network, together with the training data, as an oracle; it therefore gives a close representation of the rules the network is computing.
We are now working on linking decision-tree generation with domain ontologies, modifying the decision points to mirror genuinely "meaningful" elements of the application domain and thus providing understanding cues for the user.
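The oracle idea above can be sketched in a few lines: train a network, let it relabel the data (acting as the oracle), and fit a decision tree to those oracle labels. This is a minimal surrogate-model sketch using scikit-learn, not the full TREPAN algorithm (which also samples new queries from the input distribution and uses m-of-n split tests); the dataset and model parameters here are illustrative assumptions.

```python
# Minimal sketch of oracle-based decision-tree extraction (simplified
# relative to full TREPAN): the trained network's predictions, not the
# true labels, are what the tree is fitted to.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative synthetic data standing in for a real application domain.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box": a small neural network.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)

# Query the network as an oracle on the training inputs. (TREPAN would
# additionally draw fresh samples from the input distribution here.)
oracle_labels = net.predict(X)

# Fit a shallow, human-readable tree to the oracle's answers.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, oracle_labels)

# Fidelity: how closely the surrogate tree mimics the network's decisions
# (measured against the oracle labels, not the ground truth).
fidelity = accuracy_score(oracle_labels, tree.predict(X))
```

The key design point is that the tree is scored on *fidelity* to the network rather than accuracy on the original labels: the goal is to explain what the model does, which may differ from what the data says.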

Bio: Tarek R. Besold, PhD, is the AI Lead and a Senior Research Scientist at the Alpha Health AI Lab in Barcelona. Before that, he was a Lecturer/Assistant Professor in Data Science at City, University of London, conducting research at the intersection of artificial intelligence, computational creativity, and cognitive systems. Among other roles, Tarek was General Chair of the HLAI 2016 and 2018 Joint Multi-Conferences on Human-Level Artificial Intelligence, and founder and/or organizer of several international workshop series bridging AI and cognitive science. He co-edited the books "Computational Creativity Research: Towards Creative Machines" and "Concept Invention: Foundations, Implementations, Social Aspects and Applications", and serves in various editorial roles for several scientific journals in AI and neighboring fields.