**You must register [HERE](https://www.eventbrite.co.uk/e/creative-ai-meetup-20-natasha-jaques-sander-dieleman-registration-49664028446) on Eventbrite to confirm your spot**
Natasha Jaques, Ph.D. Candidate at the MIT Media Lab, Research Intern at DeepMind
"Learning via social awareness: Improving a deep generative sketching model with facial expression feedback"
In the quest towards artificial general intelligence (AGI), researchers have explored training models using reward functions that act as intrinsic motivators in the absence of external rewards. This paper argues that such research has overlooked an important and useful intrinsic motivator: social interaction. We posit that making an AI agent aware of implicit social feedback from humans can allow for faster, more generalizable learning, and could potentially impact AI safety. To this end, we collect social feedback in the form of facial expression reactions to samples from Sketch RNN, a deep learning model designed to produce sketch drawings. We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, by optimizing the model to produce sketches that it predicts will lead to more positive facial expressions. We show in multiple independent evaluations that the model trained with facial feedback produced sketches that users rate as subjectively more pleasing, and which induce more smiling and less frowning. Thus, we establish for the first time that implicit social feedback can improve the output of a deep learning model.
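The core idea in the abstract, optimizing a generative model's latent codes toward outputs that a learned predictor scores as eliciting positive facial expressions, can be sketched in miniature. The snippet below is a toy illustration under stated assumptions: the linear `predicted_positivity` predictor, the gradient-ascent loop, and all names are hypothetical stand-ins, not the actual Sketch RNN or LC-GAN machinery from the talk.

```python
import numpy as np

# Toy sketch of the latent-optimization idea: a predictor maps a
# latent code z to an expected "positivity" score (standing in for
# detected smiling vs. frowning), and we adjust z by gradient ascent
# on that score. The linear predictor here is an illustrative
# assumption, chosen so the gradient is simply its weight vector.

rng = np.random.default_rng(0)
w = rng.normal(size=8)  # hypothetical learned predictor weights

def predicted_positivity(z):
    """Stand-in for a model predicting facial-expression reward."""
    return float(w @ z)

def optimize_latent(z, steps=100, lr=0.1):
    """Gradient ascent on the predictor; for a linear predictor the
    gradient with respect to z is just w."""
    for _ in range(steps):
        z = z + lr * w
    return z

z0 = rng.normal(size=8)          # initial latent code
z1 = optimize_latent(z0)         # latent nudged toward positive feedback
assert predicted_positivity(z1) > predicted_positivity(z0)
```

In the real system, the predictor would be trained on viewers' facial reactions to generated sketches, and the optimization would be constrained so the adjusted latents still decode to plausible drawings.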
Natasha Jaques is a 4th year PhD Candidate at the MIT Media Lab working on Affective Machine Learning. She is currently interning with DeepMind, and has previously interned twice with Google Brain. Natasha is currently serving as a mentor for the OpenAI Scholars program. Her research is focused on endowing deep learning models with intrinsic affective and social motivation in order to improve performance on a variety of tasks. In 2016, she won best paper at the NIPS ML for Healthcare workshop for her work on using multi-task learning to personalize prediction of happiness, stress, and health. Her work on improving sequence generation models using reinforcement learning was featured in the NIPS 2016 best demo and published in ICML 2017. Natasha received an M.Sc. in Computer Science (CS) from the University of British Columbia, and a B.Sc. Honours in CS and a B.A. in Psychology from the University of Regina.
Sander Dieleman, Research Scientist, DeepMind
"Generating music in the raw audio domain"
Realistic music generation is a challenging task. When machine learning is used to build generative models of music, typically high-level representations such as scores, piano rolls or MIDI sequences are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so we embark on modelling music in the raw audio domain. I will discuss some of the advantages and disadvantages of this approach, and the challenges it entails.
Sander Dieleman is a Research Scientist at DeepMind in London, UK, where he has worked on the development of AlphaGo and WaveNet. He was previously a PhD student at Ghent University, where he conducted research on feature learning and deep learning techniques for learning hierarchical representations of musical audio signals. During his PhD he also developed the Theano-based deep learning library Lasagne and won solo and team gold medals respectively in Kaggle's "Galaxy Zoo" competition and the first National Data Science Bowl. In the summer of 2014, he interned at Spotify in New York, where he worked on implementing audio-based music recommendation using deep learning on an industrial scale.
The schedule for the evening will be as follows.
6.30pm - 7pm Arrive
7pm - 7.10pm Introduction
7.10pm - 7.50pm First talk - Natasha Jaques
7.50pm - 8.30pm Second talk - Sander Dieleman