Drug Discovery and Safe Reinforcement Learning


Details
Agenda:
- 18:30: Doors open, pizza, beer, networking
- 19:00: First talk
- 20:00: Break & networking
- 20:15: Second talk
- 21:30: Close
ML for Drug Discovery - Amir Saffari
Discovery in scientific domains has long been attributed to human genius and creativity. Recent advances in AI are making it possible to design interactive, intelligent systems that can accelerate and augment human creativity by taking on greater responsibility for different stages of the scientific discovery process.
In this talk, I will discuss this area in detail, explore the current state of the art and its challenges, and focus on AI approaches, from Deep Generative Models to Active and Reinforcement Learning, that can be used to generate novel scientific hypotheses. I will also draw on real-life examples from our current attempts at using these methods in the field of Drug Discovery.
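To make the reinforcement-learning angle concrete, here is a minimal, illustrative sketch, not the speaker's system: a categorical policy over a toy SMILES-like token vocabulary is tuned with REINFORCE toward a stand-in reward. The vocabulary, reward function, and hyperparameters are all assumptions made for illustration; a real pipeline would use a deep generative model as the policy and a learned property predictor as the reward.

```python
# Illustrative sketch of RL for molecule generation (not BenevolentAI's system).
# A per-position categorical policy over a toy token set is updated with
# REINFORCE toward a stand-in reward.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = list("CNO=()")           # toy SMILES-like alphabet (assumption)
LENGTH, LR, BATCH = 8, 0.1, 64   # illustrative hyperparameters

logits = np.zeros((LENGTH, len(VOCAB)))  # independent per-position policy


def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def sample_batch(logits):
    """Sample token indices for a batch of sequences from the policy."""
    probs = softmax(logits)
    return np.stack([
        [rng.choice(len(VOCAB), p=probs[t]) for t in range(LENGTH)]
        for _ in range(BATCH)
    ])


def reward(seq):
    """Stand-in reward: fraction of carbon tokens. A real system would score
    a property such as predicted binding affinity instead."""
    return np.mean([VOCAB[i] == "C" for i in seq])


for step in range(200):
    batch = sample_batch(logits)
    rewards = np.array([reward(s) for s in batch])
    baseline = rewards.mean()            # simple variance reduction
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for seq, r in zip(batch, rewards):
        for t, tok in enumerate(seq):
            # REINFORCE: grad log pi(tok | t) * (reward - baseline)
            one_hot = np.eye(len(VOCAB))[tok]
            grad[t] += (one_hot - probs[t]) * (r - baseline)
    logits += LR * grad / BATCH

print("mean reward:", np.array([reward(s) for s in sample_batch(logits)]).mean())
```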
Bio: Amir is Director of Applied AI at BenevolentAI. BenevolentAI’s initial scientific focus has been human health, specifically rare disease groups in often overlooked areas. In human health, BenevolentAI has harnessed its technology to make major breakthroughs and accelerate drug development. The company has entered into significant license agreements with some of the world’s largest pharmaceutical companies and is beginning its first Phase IIb clinical study in 2017.
Towards Safe Reinforcement Learning - Andreas Krause
Reinforcement learning has seen stunning empirical breakthroughs. At its heart is the challenge of trading off exploration -- collecting data to learn better models -- against exploitation -- using current estimates to make decisions. In many applications, exploration is a potentially dangerous proposition, as it requires experimenting with actions that have unknown consequences. Hence, most prior work has confined exploration to simulated environments. In this talk, I will formalize the problem of safe exploration as one of optimizing an unknown reward function subject to unknown constraints. Both the reward and the constraints are revealed through noisy experiments, and safety requires that no infeasible action is chosen at any point. Starting with the bandit setting, where actions do not affect state transitions, I will discuss increasingly rich models that capture both known and unknown dynamics. Our approach uses Bayesian inference over the objective and constraints and, under some regularity conditions, is guaranteed to be both safe and complete, i.e., to converge to a natural notion of the reachable optimum. I will also show experiments on safe automatic parameter tuning of robotic platforms, as well as safe exploration of unknown environments.
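As a concrete illustration of the bandit setting above, here is a minimal sketch of Gaussian-process-based safe exploration on a discrete 1-D domain, in the spirit of this line of work but deliberately simplified and not the speaker's implementation: only points whose lower confidence bound clears a safety threshold are candidates, and the most uncertain safe point is evaluated next. The RBF kernel, threshold h, noise level, and test function are all illustrative assumptions, as is using a single function as both reward and safety constraint.

```python
# Minimal sketch of GP-based safe exploration on a discrete 1-D domain
# (simplified illustration; not the speaker's code). Safety means the
# unknown function value must stay above a known threshold h.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 61)             # discrete candidate actions
h, noise, beta = 0.2, 0.05, 2.0        # safety threshold, noise std, CI width


def f(x):
    """Unknown true reward (illustrative test function)."""
    return 1.0 - 0.5 * x**2


def rbf(a, b, ls=0.7):
    """RBF kernel with unit variance (assumption)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)


def gp_posterior(x_obs, y_obs):
    """GP posterior mean and std at all candidates given noisy observations."""
    K = rbf(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    k_star = rbf(X, x_obs)
    mu = k_star @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, k_star.T)
    var = np.clip(1.0 - np.sum(k_star * v.T, axis=1), 1e-9, None)
    return mu, np.sqrt(var)


# Seed with one action known a priori to be safe, as safe exploration requires.
x_obs, y_obs = [0.0], [f(0.0) + noise * rng.standard_normal()]

for step in range(15):
    mu, sd = gp_posterior(np.array(x_obs), np.array(y_obs))
    safe = mu - beta * sd >= h          # actions certified safe w.h.p.
    if not safe.any():
        break
    # Among certified-safe actions, evaluate the most uncertain one to
    # expand the safe set and refine the estimate of the optimum.
    idx = np.flatnonzero(safe)[np.argmax(sd[safe])]
    x_obs.append(X[idx])
    y_obs.append(f(X[idx]) + noise * rng.standard_normal())

mu, sd = gp_posterior(np.array(x_obs), np.array(y_obs))
best = X[np.argmax(np.where(mu - beta * sd >= h, mu, -np.inf))]
print(f"estimated safe optimum near x = {best:.2f}")
```

The design choice mirrors the abstract: safety is enforced at every step by acting only inside the certified-safe set, while uncertainty sampling within that set gradually expands it toward the reachable optimum.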
Bio: Andreas Krause is an Associate Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center. Before that, he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received an ERC Starting Investigator grant and the Deutscher Mustererkennungspreis, as well as best paper awards at several premier conferences and journals.
