
Interpretable AI Models

Hosted by Ian M.

Details

For this event, we will focus on explainable and interpretable AI. In the past several years, artificial neural networks (ANNs), and particularly deep neural networks, have reemerged and seen a rapid surge in use, thanks to technological developments that have made them far more practical to train and deploy. Despite their ease of implementation, however, such models remain largely inscrutable, making it difficult (or impossible) to understand and explain how they reach their conclusions. Our discussion will cover the risks and potential consequences of relying on opaque models; whether there are situations where the benefits of an opaque model outweigh these risks; and proposed solutions and alternatives. As always, people of all backgrounds, experiences, and perspectives are welcome to join the discussion.
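To make the contrast concrete: an interpretable model is one whose decision process can be read directly from the model itself, whereas an opaque model's predictions can only be probed from the outside. A minimal sketch (not from the readings; the loan-approval rule, feature, and threshold are all hypothetical) is a one-rule decision stump, an extreme case of an interpretable model:

```python
# A decision stump: the entire "model" is one inspectable comparison,
# so every prediction comes with a complete, human-readable explanation.
# Contrast this with a deep network, whose millions of weights admit no
# such direct reading. Feature and threshold here are illustrative only.

def stump_predict(income, threshold=50_000):
    """Approve a loan iff income exceeds the threshold."""
    return "approve" if income > threshold else "deny"

def explain(income, threshold=50_000):
    """The explanation IS the model: restate the rule that fired."""
    op = ">" if income > threshold else "<="
    return f"income {income} {op} threshold {threshold}"

for income in (30_000, 80_000):
    print(stump_predict(income), "because", explain(income))
```

Real interpretable models (sparse linear models, small decision trees, rule lists) generalize this idea: the point, echoed in Rudin's article, is that the explanation and the model are the same object, rather than a post-hoc approximation of a black box.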

Our discussion will be based in part on the following articles (reading the articles is not necessary to attend, but is recommended for anyone who wants more background on this topic):

Samek, W., T. Wiegand, & K.-R. Müller. (2017). Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. https://arxiv.org/pdf/1708.08296.pdf

Carabantes, M. (2019). Black-box artificial intelligence: an epistemological and critical analysis. https://sci-hub.tw/10.1007/s00146-019-00888-w

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. https://arxiv.org/pdf/1811.10154.pdf

Miller, T. (2018). Explanation in Artificial Intelligence: Insights from the Social Sciences. https://arxiv.org/pdf/1706.07269.pdf

Doshi-Velez, F. & B. Kim. (2017). Towards a Rigorous Science of Interpretable Machine Learning. https://arxiv.org/pdf/1702.08608.pdf

Responsible AI Meetup
Passione Cafe
2049 Shattuck Square · Berkeley, CA