Sebastian Flennerhag | Towards machines that teach themselves
Details
Virtual London Machine Learning Meetup - 02.02.2022 @ 18:30
We would like to invite you to our next Virtual Machine Learning Meetup.
Agenda:
- 18:25: Virtual doors open
- 18:30: Talk
- 19:10: Q&A session
- 19:30: Close
Sponsors
https://evolution.ai/ : Machines that Read - Intelligent data extraction from corporate and financial documents.
- Title: Towards machines that teach themselves (speaker: Sebastian Flennerhag, research scientist at DeepMind)
Papers:
- https://openreview.net/forum?id=b-ny3x071E5
- http://proceedings.mlr.press/v70/finn17a.html
- https://arxiv.org/abs/1803.02999
- https://openreview.net/forum?id=HygBZnRctX
Abstract: Humans can shape their own learning rules with experience. Contemporary AI systems, in stark contrast, cannot: when the data or the task changes, they must either be re-trained from scratch or manually fine-tuned, which limits what AI systems can achieve and introduces significant inefficiencies in commercial applications. This talk presents recent research in meta-learning that aims to bridge this gap by empowering AI systems to learn their own learning rules through open-ended discovery. We start from the standard meta-learning paradigm and discuss how it enables an AI system to learn a learning rule. A key limitation of this paradigm is that the designer must explicitly specify a learning objective, thereby directly controlling what the AI system will learn to learn. Motivated by this limitation, we present a novel paradigm for learning to learn, Bootstrapped Meta-Learning. This paradigm enables an AI system to learn to learn by bootstrapping from what it already knows, thereby opening the door to open-ended discovery. Through a series of careful experiments, we study how meta-learning can yield significant benefits for an AI system and detail why bootstrapped meta-learning can push these gains much further.
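For readers unfamiliar with the "standard meta-learning paradigm" the abstract starts from, the following is a minimal, hedged sketch in the spirit of the MAML paper linked above: an inner loop adapts parameters to a task with plain SGD, and an outer loop differentiates through that adaptation to improve the initialisation against an explicitly specified meta-objective (the very design choice the talk questions). The toy sinusoid-regression task, network size, and step counts are illustrative assumptions, not details from the talk or the papers.

```python
# Illustrative sketch only (assumed toy task: 1-D sinusoid regression).
# Inner loop: SGD on a task's support set. Outer loop: gradient of the
# post-adaptation query loss w.r.t. the initialisation (the meta-objective).
import jax
import jax.numpy as jnp

INNER_LR, OUTER_LR, INNER_STEPS = 0.1, 0.01, 3

def predict(params, x):
    # Tiny one-hidden-layer network.
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_adapt(params, x, y):
    # Inner loop: a few SGD steps on one task's support set.
    for _ in range(INNER_STEPS):
        grads = jax.grad(loss)(params, x, y)
        params = [p - INNER_LR * g for p, g in zip(params, grads)]
    return params

def meta_loss(meta_params, task):
    # Outer objective: query-set loss *after* inner adaptation.
    # This is the explicitly specified learning objective the abstract mentions.
    x_s, y_s, x_q, y_q = task
    adapted = inner_adapt(meta_params, x_s, y_s)
    return loss(adapted, x_q, y_q)

def init_params(key):
    k1, k2 = jax.random.split(key)
    return [jax.random.normal(k1, (1, 32)) * 0.1, jnp.zeros(32),
            jax.random.normal(k2, (32, 1)) * 0.1, jnp.zeros(1)]

def sample_task(key):
    # Random-amplitude sinusoid: support and query sets.
    k1, k2, k3 = jax.random.split(key, 3)
    amp = jax.random.uniform(k1, (), minval=0.5, maxval=2.0)
    x_s = jax.random.uniform(k2, (10, 1), minval=-3.0, maxval=3.0)
    x_q = jax.random.uniform(k3, (10, 1), minval=-3.0, maxval=3.0)
    return x_s, amp * jnp.sin(x_s), x_q, amp * jnp.sin(x_q)

key = jax.random.PRNGKey(0)
meta_params = init_params(key)
for step in range(1000):
    key, task_key = jax.random.split(key)
    grads = jax.grad(meta_loss)(meta_params, sample_task(task_key))
    meta_params = [p - OUTER_LR * g for p, g in zip(meta_params, grads)]
```

Bootstrapped Meta-Learning, the talk's focus, replaces that hand-specified outer objective with a target the system generates for itself; see the first OpenReview link above for the actual method.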
Bio: Sebastian Flennerhag is a research scientist at DeepMind. His research focuses on large-scale meta-learning with applications in supervised learning and reinforcement learning, and he is also actively pursuing research in continual and open-ended learning. Sebastian holds a Ph.D. in Machine Learning from the University of Manchester and an MSc in Economics from the Stockholm School of Economics.
The discussion will be facilitated by John (JD) Co-Reyes. JD is a research scientist at Google Brain. His research covers various aspects of deep reinforcement learning, including meta-learning new RL algorithms and architectures, building latent dynamics models for visual model-based RL, and combining unsupervised objectives with open-world environments to support the emergence of complex behavior. He completed his PhD in the Berkeley AI Research Lab (BAIR), advised by Sergey Levine, studying deep reinforcement learning. He has interned at Intel, Clarifai, and Google Brain, where his work on Evolving Reinforcement Learning Algorithms received an oral presentation at ICLR. JD received his B.S. in Computer Science from Caltech.