Although neural network models outperform other approaches to text classification problems, some data scientists still hesitate to use them, assuming that the model’s decisions cannot be interpreted. In this talk, I will discuss the difference between global and local interpretability; what these concepts mean in the context of text classification models; and how to use two specific local interpretability methods, saliency and occlusion, to open the black box of a neural network. I will also touch on hierarchical attention networks, a neural network text classification model with built-in local interpretability in the form of attention.
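To give a flavor of one of the methods mentioned above, the following is a minimal sketch of occlusion-based word importance. The classifier here (`toy_classifier`) is a hypothetical stand-in, not the model from the talk; any function mapping a token list to a class probability would work the same way. The idea is simply to remove one token at a time and measure how much the predicted probability drops.

```python
def toy_classifier(tokens):
    """Hypothetical stand-in for a trained text classifier:
    returns the probability of a 'positive' class."""
    positive = {"great", "love", "excellent"}
    hits = sum(1 for t in tokens if t in positive)
    return hits / (len(tokens) or 1)

def occlusion_importance(tokens, classify):
    """Score each token by the drop in the predicted probability
    when that token is occluded (removed) from the input."""
    base = classify(tokens)
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        scores.append((tokens[i], base - classify(occluded)))
    return scores

tokens = "i love this great movie".split()
for word, score in occlusion_importance(tokens, toy_classifier):
    print(f"{word:>6}: {score:+.3f}")
```

Words whose removal lowers the score most are the ones the model relies on; with a real neural model, `classify` would wrap a forward pass instead of a keyword count.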
Rebecca Jones is a Data Scientist at Allstate Insurance Company, where she has specialized in Natural Language Processing for the past two years. She has a graduate degree in Applied Mathematics from Northwestern University and an undergraduate degree in Political Science from the University of Chicago.
6:00 p.m. to 6:30 p.m. is social time. The seminar will start at 6:30 p.m.
Our Sponsor: Metis Chicago (https://www.thisismetis.com/)