[webMeetup] From Text & Graphs to Explainable New Knowledge

![[webMeetup] From Text & Graphs to Explainable New Knowledge](https://secure.meetupstatic.com/photos/event/5/e/f/7/highres_494244311.webp?w=750)
Details
Join the first NLP Zurich tech shindig of 2021 online! Carolin Lawrence (NEC Labs Europe) will take us on a journey from text & graphs to explainable new knowledge.
Explaining the predictions of an AI system is important, especially for systems where user trust is crucial. We present Gradient Rollback, which can explain a prediction of a neural network by returning the training instances that caused this prediction to become likely.
We are looking forward to welcoming you!
Agenda:
17:55 Join the webinar
18:00 Carolin Lawrence (NEC Labs Europe): From Text & Graphs to Explainable New Knowledge
18:35 Q&A
18:50 Virtual Hugs and Kisses ⊂(◉‿◉)つ
Talk Summary:
We live in a world where ever more data is available. Even for questions on a narrow topic, it has become impossible for humans to sift through all the amassed data. How can we use AI to leverage this wealth of information to improve our world? At NEC Labs Europe, we arrange data in knowledge graphs, which express how the different elements of a topic interact with each other.
Knowledge graphs can be constructed either directly from available structured data or by using NLP to extract the relevant information from text. Once a graph is constructed, we use Graph AI technology to infer new insights that answer the questions posed by human users.
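As a concrete illustration (a minimal sketch, not code from the talk; the entities and relations below are hypothetical examples): a knowledge graph can be stored as (head, relation, tail) triples, and knowledge base completion asks which absent triples are nevertheless likely true.

```python
# A minimal sketch of a knowledge graph as (head, relation, tail) triples.
# All entities and relations here are hypothetical illustrations.
triples = {
    ("aspirin", "treats", "headache"),
    ("ibuprofen", "treats", "inflammation"),
    ("aspirin", "interacts_with", "ibuprofen"),
}

# Knowledge base completion asks: which triples missing from the graph are
# nevertheless likely true? A Graph AI model scores candidate triples such
# as this one instead of merely looking them up.
candidate = ("aspirin", "treats", "inflammation")
print(candidate in triples)  # False -> a candidate for the model to infer
```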
The best-performing Graph AI models are neural networks. However, neural networks are black boxes: humans cannot understand how they arrive at their predictions. Yet explaining a prediction is crucial, especially in situations where user trust is paramount.
To explain the predictions of a neural network, we developed Gradient Rollback (GR). We apply GR to neural matrix factorization models, which are commonly used on graphs for knowledge base completion and recommender systems. In this setting, we can show that GR comes with a theoretical guarantee: its approximation error is smaller than known bounds on the stability of stochastic gradient descent. In empirical experiments, we establish that GR identifies explanations that are faithful, meaning they truly reflect how the model arrived at its prediction.
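To make the core idea concrete, here is a minimal, self-contained sketch of gradient rollback on a toy DistMult-style matrix factorization model. Everything below is our illustrative assumption, not NEC's implementation: the scorer, the toy triples, and the simplified training loop (gradient ascent on the raw triple score, with no negative sampling or proper loss). The idea it demonstrates: during training, record each training triple's cumulative parameter updates; to explain a prediction, temporarily subtract ("roll back") each triple's recorded influence and measure how much the query score drops.

```python
import numpy as np

rng = np.random.default_rng(0)
ents = {"aspirin": 0, "ibuprofen": 1, "headache": 2, "inflammation": 3}
rels = {"treats": 0, "interacts_with": 1}
dim, lr, epochs = 8, 0.05, 40

E = rng.normal(scale=0.1, size=(len(ents), dim))  # entity embeddings
R = rng.normal(scale=0.1, size=(len(rels), dim))  # relation embeddings

train = [("aspirin", "treats", "headache"),
         ("ibuprofen", "treats", "inflammation"),
         ("aspirin", "interacts_with", "ibuprofen")]

def score(h, r, t):
    # DistMult-style trilinear score <e_h, w_r, e_t>
    return float(np.sum(E[ents[h]] * R[rels[r]] * E[ents[t]]))

# Influence store: cumulative parameter updates attributed to each training triple.
influence = {x: {("E", ents[x[0]]): np.zeros(dim),
                 ("R", rels[x[1]]): np.zeros(dim),
                 ("E", ents[x[2]]): np.zeros(dim)} for x in train}

# Plain SGD that pushes each observed triple's score up, logging every
# parameter update under the training triple that caused it.
for _ in range(epochs):
    for h, r, t in train:
        eh, wr, et = E[ents[h]].copy(), R[rels[r]].copy(), E[ents[t]].copy()
        dh, dr, dt = lr * wr * et, lr * eh * et, lr * eh * wr  # score gradients
        E[ents[h]] += dh; R[rels[r]] += dr; E[ents[t]] += dt
        inf = influence[(h, r, t)]
        inf[("E", ents[h])] += dh; inf[("R", rels[r])] += dr; inf[("E", ents[t])] += dt

# Explain a predicted triple: roll back each training instance's influence,
# rescore, and report how much the prediction's score falls without it.
query = ("aspirin", "treats", "inflammation")
base = score(*query)
for x in train:
    for (kind, idx), delta in influence[x].items():
        (E if kind == "E" else R)[idx] -= delta
    drop = base - score(*query)  # large drop => influential training instance
    for (kind, idx), delta in influence[x].items():
        (E if kind == "E" else R)[idx] += delta
    print(x, round(drop, 5))
```

The training triples with the largest score drops would be returned as the explanation; the theoretical results mentioned above bound the error of exactly this kind of rollback approximation via the stability of stochastic gradient descent.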
About the Speaker:
Carolin Lawrence (https://carolinlawrence.github.io/) is a research scientist at NEC Labs Europe (https://www.neclab.eu/), where she works on natural language processing and graph structured data. She particularly cares about making AI explainable to ensure that research leads to a positive impact. Carolin holds a PhD in computational linguistics from the University of Heidelberg.
Website: https://carolinlawrence.github.io/
Twitter: https://twitter.com/caro__lawrence
NLP Zurich:
Meetup.com: https://www.meetup.com/NLP-Zurich/
Linkedin: https://www.linkedin.com/company/nlp-zurich
Twitter: https://twitter.com/nlp_zurich
