PyData Berlin August Meetup


Details
Welcome to the August Virtual Meetup
The talks will start at 19:00.
The link to the Zoom meeting will be sent to all attendees about an hour before the meetup, and there will be a YouTube live stream for those not on the Zoom call.
Talk 1 by Pan Kessel: Can explanations be trusted?
Abstract: Explanation methods in Machine Learning are on the rise. This is unsurprising, as they promise to make black-box algorithms transparent, which in turn can lead to increased trust and reliability. Furthermore, explanation methods are very simple to deploy, as they are now integrated into standard deep learning libraries.
In this talk, however, I will demonstrate that explanations have to be treated with care, because they can easily be manipulated to closely reproduce an almost arbitrary target explanation. The underlying mechanism behind this surprising degree of manipulability can be understood theoretically using the mathematics of the General Theory of Relativity, i.e., differential geometry.
Reference:
https://papers.nips.cc/paper/9511-explanations-can-be-manipulated-and-geometry-is-to-blame
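To make the attack concrete, below is a minimal, self-contained Python/PyTorch sketch of the general idea from the paper above. It is not the speaker's code; the model, data, and loss weights are invented for illustration. It perturbs an input so that its gradient (saliency) explanation moves toward an arbitrary target map while the model's output stays essentially unchanged:

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# A small differentiable stand-in for an arbitrary trained classifier.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.Softplus(),  # smooth activation: see note below
    torch.nn.Linear(32, 2),
)

def saliency(x):
    # Gradient ("saliency") explanation: derivative of the top logit
    # with respect to the input. create_graph=True lets us later
    # differentiate through the explanation itself.
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    score = model(x)[0].max()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

x_orig = torch.randn(1, 10)
expl_orig = saliency(x_orig).detach()
out_orig = model(x_orig).detach()

# An (almost) arbitrary target explanation the manipulator wants to fake.
target_expl = torch.randn_like(expl_orig)

x_adv = x_orig.clone().detach().requires_grad_(True)
opt = torch.optim.Adam([x_adv], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    # Push the explanation toward the target while pinning the output.
    loss = F.mse_loss(saliency(x_adv), target_expl) \
         + 10.0 * F.mse_loss(model(x_adv), out_orig)
    loss.backward()
    opt.step()

print("explanation distance to target:", F.mse_loss(saliency(x_adv), target_expl).item())
print("output drift:", F.mse_loss(model(x_adv), out_orig).item())

Note the smooth Softplus activation: the optimization differentiates through the explanation, i.e. it needs second derivatives of the network, which vanish almost everywhere for piecewise-linear activations such as ReLU.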
Bio: Pan Kessel is a member of the machine learning group at Technische Universität Berlin. He received his PhD in String Theory at the Max Planck Institute for Gravitational Physics. His current main research interests are theoretically grounded explainable AI, generative models and their applications to quantum physics, and the theory of learning.
Talk 2 by Manas Gaur and Kaushik Roy: Knowledge-infused Statistical Learning for Social Good Applications
Abstract: Humans can provide symbolic knowledge in structured form for potential use by an AI system in learning human-desirable concepts. In clinical settings, for instance, an AI's prediction of patient outcomes can be guided by knowledge from the patient's history. This history contains concepts such as treatment information, observational and drug-related information, mental health condition, and severity of the disease or disorder. Additionally, there is often a graphical structure to the knowledge among these concepts; for example, patient symptoms cause certain tests to be taken, which in turn affects the prescription of medication. This type of structure between human-interpretable concepts can help the AI make an informed prediction.
References:
http://kidl2020.aiisc.ai/
http://wiki.aiisc.ai/index.php/Main_Page
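As a concrete, heavily simplified illustration of the idea, here is a toy Python sketch of feature-level knowledge infusion. The concept graph, patient records, and encoding below are invented for this example and are not taken from the speakers' work:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Structured knowledge: directed edges between human-interpretable
# concepts, e.g. a symptom causes a test, which affects medication.
knowledge_graph = {
    "symptom": ["test"],
    "test": ["medication"],
    "medication": ["outcome"],
}

def knowledge_features(record):
    # Encode how much of a known causal chain is active in a record.
    # `record` maps concept names to 0/1 observations; each edge whose
    # endpoints are both observed contributes one supported link.
    supported = sum(
        record.get(src, 0) * record.get(dst, 0)
        for src, dsts in knowledge_graph.items()
        for dst in dsts
    )
    return [supported]

# Synthetic patient records: raw measurements plus concept indicators.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 5))
concepts = [
    {"symptom": int(r[0] > 0), "test": int(r[1] > 0), "medication": int(r[2] > 0)}
    for r in raw
]
y = (raw[:, 0] + raw[:, 1] > 0).astype(int)  # toy outcome labels

# Knowledge-infused design matrix: raw features + graph-derived features.
X = np.hstack([raw, np.array([knowledge_features(c) for c in concepts])])
clf = LogisticRegression().fit(X, y)

In practice, the structured knowledge would come from curated resources such as clinical knowledge bases, and it can be infused at deeper levels of the learning pipeline than this simple feature concatenation.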
Bio:
Manas Gaur is currently a Ph.D. student at the Artificial Intelligence Institute at the University of South Carolina. He has been a Data Science and AI for Social Good Fellow with the University of Chicago and Dataminr Inc. His interdisciplinary research, funded by the NIH and NSF, operationalizes the use of Knowledge Graphs, Natural Language Understanding, and Machine Learning to solve social-good problems in the domains of Mental Health, Cyber Social Harms, and Crisis Response. His work has appeared at premier AI and Data Science conferences (CIKM, WWW, AAAI, CSCW), in scientific journals (PLOS One, Springer Nature, IEEE Internet Computing), and at healthcare-specific meetings (NIMH MHSR, AMIA).
Personal Webpage: https://manasgaur.github.io/
Kaushik Roy is currently a Ph.D. student at the Artificial Intelligence Institute at the University of South Carolina. He completed his master's in Computer Science at Indiana University Bloomington and has worked at UT Dallas's StARLinG Lab. His research interests include Statistical Relational Artificial Intelligence, Knowledge Graphs, Machine Learning, and Reinforcement Learning. His work has been featured at reputable venues (IEEE, KR).
----------------------------------------------------------------------------------------------------
NumFOCUS Code of Conduct
https://numfocus.org/code-of-conduct
Please have a look at the comment section for the short version of our Code of Conduct.

