DataNights & Microsoft Reactor - Explainable AI #5: XAI in practice
Details
Direct link to join the session.
Explainable AI (XAI) Course #5: Explainable AI in practice
28 March, 2023 | 7:00 PM
Speakers: Liat Antwarg Friedman, Chen Galed and Odelia Melamed
Speakers' Language: Hebrew
Virtual session.
Register here!
How do you properly incorporate explanations into machine learning projects, and what aspects should you keep in mind?
Over the past few years, the need to explain the output of machine learning models has received growing attention. Explanations not only reveal the reasons behind a model's predictions and increase users' trust in the model, but they can also serve other purposes. To fully utilize explanations and incorporate them into machine learning projects, the following aspects should be taken into consideration: the explanation goals, the explanation method, and the quality of the explanations. In this talk, we will discuss how to select the appropriate explanation method based on the intended purpose of the explanation. Then, we will present two approaches for evaluating explanations, including practical examples of evaluation metrics, while highlighting the importance of assessing explanation quality.
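To give a flavor of what "assessing explanation quality" can look like in practice, here is a minimal sketch (not taken from the talk itself) of one commonly used idea: a deletion-style faithfulness check. Features ranked as most important by an explanation are masked first, and a faithful explanation should make the model's performance drop faster than masking random features. The dataset, model, and the use of permutation importance as the explanation are illustrative assumptions.

```python
# Minimal sketch of a deletion-style faithfulness check for feature attributions.
# All modeling choices here (synthetic data, random forest, permutation importance)
# are illustrative assumptions, not the method presented in the talk.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Explanation": a global feature ranking estimated via permutation importance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
explained_order = np.argsort(result.importances_mean)[::-1]  # most important first

def accuracy_after_masking(feature_order, k):
    """Replace the top-k ranked features with their training-set mean and re-score."""
    X_masked = X_test.copy()
    X_masked[:, feature_order[:k]] = X_train[:, feature_order[:k]].mean(axis=0)
    return model.score(X_masked, y_test)

rng = np.random.default_rng(0)
random_order = rng.permutation(X.shape[1])

# A faithful explanation should degrade accuracy faster than a random ranking.
for k in (1, 3, 5, 10):
    print(f"k={k:2d}  masked by explanation: {accuracy_after_masking(explained_order, k):.3f}"
          f"  masked at random: {accuracy_after_masking(random_order, k):.3f}")
```

The gap between the two curves (explanation-guided masking versus random masking) is one simple, quantitative signal of how faithful the explanation is to the model's behavior.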
Next, we will examine the various purposes explanations can serve, along with the stage of the machine learning pipeline at which each explanation should be incorporated.
Finally, we will present a real Microsoft use case of classifying scripts as malware-related, and show how high-dimensional explanations can be beneficial in this context.
This session is part of a series of sessions in the XAI course.
This course is designed for data scientists who have at least two years of hands-on experience with machine learning and Python, and a basic background in deep learning. The XAI course is run on a voluntary basis by DataNights and Microsoft organizers and is free of charge for participants.
Course Leaders:
Bitya Neuhof, DataNights
Yasmin Bokobza, Microsoft
