ML * Privacy * 2


Details
The event is online: https://www.youtube.com/watch?v=EEDu85AkwMI&feature=youtu.be
----
Talk I: Does machine learning threaten privacy?
Speaker I: Verena Battis, Fraunhofer Institute for Secure Information Technology
Abstract: Machine learning methods have become an integral part of our everyday lives. In many cases, private and/or sensitive information is used to train these models. Until recently, it was assumed that the data used for training could not be inferred from the final model. However, recent research has shown that this assumption is false. This talk will address the privacy threats posed by machine learning techniques and the questions that arise from them – e.g. can an attacker extract private training data from a trained model? Is it possible to steal a model through simple query access?
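To make the first question concrete, here is a hypothetical toy sketch (not code from the talk) of a confidence-threshold membership-inference attack: an overfitted model tends to be more confident on examples it was trained on, and an attacker with query access alone can exploit that gap. The confidence distributions below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated model confidences: "members" (training examples) get higher
# confidence on average than "non-members" (unseen examples).
member_conf = rng.normal(loc=0.9, scale=0.05, size=1000).clip(0, 1)
nonmember_conf = rng.normal(loc=0.7, scale=0.10, size=1000).clip(0, 1)

THRESHOLD = 0.8  # attacker guesses "member" above this confidence

tp = (member_conf > THRESHOLD).mean()     # fraction of members caught
fp = (nonmember_conf > THRESHOLD).mean()  # fraction of non-members misflagged
advantage = tp - fp                       # gain over random guessing

print(f"TPR={tp:.2f}, FPR={fp:.2f}, advantage={advantage:.2f}")
```

The attack needs nothing but the model's output confidences, which is exactly why query access alone can already leak information about the training set.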
Bio: After completing her master's degree in statistics at the University of Trier, Verena joined the Fraunhofer Institute for Secure Information Technology (SIT) as a research associate in March 2019. Her research focuses on the risks that modern machine learning methods – e.g. neural networks – pose to the privacy of individuals, the fundamentals that enable those privacy threats in the first place, and ways to mitigate them.
----
Talk II: Privacy-preserving Machine Learning
Speaker II: Franziska Boenisch, Fraunhofer Institute for Applied and Integrated Security
Abstract: With the growing amount of data being collected about individuals, ever more complex machine learning models can be trained on those individuals’ characteristics and behaviors. Methods for extracting private information from the trained models are becoming increasingly sophisticated, threatening individual privacy. In this talk, I will introduce some powerful methods for training neural networks with privacy guarantees. I will also show how to apply those methods effectively in order to achieve a good trade-off between utility and privacy.
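As background for the kind of guarantee the talk refers to, here is a minimal sketch of the core update step of DP-SGD (Abadi et al.), a standard method for training neural networks with differential-privacy guarantees: clip each per-example gradient, average, and add Gaussian noise. The function name and parameter values are illustrative, not the speaker's code.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One privatized gradient step: clip each example's gradient to
    clip_norm, average the clipped gradients, then add Gaussian noise
    scaled to the clipping bound."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std is proportional to the sensitivity (clip_norm) and
    # inversely proportional to the batch size.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise

# Toy per-example gradients: the first one has norm 5 and gets clipped.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
print(dp_sgd_step(grads))
```

Clipping bounds any single example's influence on the update, and the noise masks what remains; the utility/privacy trade-off the abstract mentions is governed largely by the choice of clip_norm and noise_multiplier.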
Bio: Franziska completed a Master’s degree in Computer Science at Freie Universität Berlin and Eindhoven University of Technology. For the past 1.5 years, she has been working at Fraunhofer AISEC as a Research Associate on topics related to Privacy-Preserving Machine Learning, Data Protection, and Intellectual Property Protection for Neural Networks. Additionally, she is currently doing her PhD in Berlin.