Special Interest Group: Explainability


Details
Join AI LA and The AI Responsibility Lab for a crash course on AI explainability. We'll cover why AI explainability is a challenge, how we can overcome that challenge, and why it's one of the most important questions in AI, business, and society right now.
🚨 To get the Zoom link, register here. 🚨
https://us02web.zoom.us/meeting/register/tZMkfuigrjooGtc0MVCl7EWM5ogCPV8QjJxn
How do machines think?
This question is becoming less academic and more practical by the day as AI systems play a larger role in our lives. Critical to living harmoniously with AI is making sure we can trust it. And critical to trusting AI is our ability to explain and understand how it behaves. How do we explain how AI systems make decisions? How should you feel if we can't explain them? And when it comes to thinking about smart machines, where should we start?
Reading list:
[Explainable AI in Industry]
https://dl.acm.org/doi/abs/10.1145/3292500.3332281
[Stakeholders in Explainable AI]
https://arxiv.org/abs/1810.00184
[Explainable AI: The New 42?]
https://link.springer.com/chapter/10.1007/978-3-319-99740-7_21
[One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques]
https://arxiv.org/abs/1909.03012
[Asking ‘Why’ in AI: Explainability of Intelligent Systems – Perspectives and Challenges]
https://onlinelibrary.wiley.com/doi/epdf/10.1002/isaf.1422
~
The AI Responsibility Lab builds the ideas and tools that help companies run world-class AI responsibility change management programs.
For more information on how we can help you, contact us at airesponsibilitylab.com.
