Title: Learning with Reflective Likelihoods
Abstract: Models parameterized by deep neural networks have achieved state-of-the-art results in many domains. These models are usually trained using the maximum likelihood principle with a finite set of observations. However, training deep probabilistic models with maximum likelihood can lead to an issue we refer to as input forgetting. In deep generative latent-variable models, input forgetting corresponds to posterior collapse---a phenomenon in which the latent variables become independent of the observations. However, input forgetting can happen even in the absence of latent variables. We attribute input forgetting in deep probabilistic models to the finite-sample dilemma of maximum likelihood. We formalize this problem and propose a learning criterion---termed reflective likelihood---that explicitly prevents input forgetting. We empirically observe that the proposed criterion significantly outperforms the maximum likelihood objective in several applications.
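For context, a brief sketch using standard definitions (these formulas are background, not taken from the talk itself): maximum likelihood fits parameters $\theta$ to a finite sample of $n$ observations, and posterior collapse is commonly characterized by the KL term of the variational objective vanishing:

$$
\hat{\theta}_{\text{MLE}} = \arg\max_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \log p_{\theta}(x_i),
\qquad
\text{collapse:}\quad \mathrm{KL}\!\left( q_{\phi}(z \mid x) \,\middle\|\, p(z) \right) \to 0 .
$$

When the KL term vanishes, the approximate posterior matches the prior, so the latent variables carry no information about the observations; that is, the model "forgets" its inputs in the sense described above.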
Speaker Bio: Adji Bousso Dieng is a PhD candidate at Columbia University, where she works with David Blei and John Paisley. Her work at Columbia combines probabilistic graphical modeling and deep learning to design better generative models. She develops these models within the framework of variational inference, which enables efficient and scalable learning. Her hope is that her research can be applied to many real-world applications, particularly natural language understanding.
Prior to joining Columbia, Adji worked as a Junior Professional Associate at the World Bank. She did her undergraduate training in France, where she attended Lycée Henri IV and Télécom ParisTech---part of France's Grandes Écoles system. She holds a Diplôme d'Ingénieur from Télécom ParisTech and spent the third year of Télécom ParisTech's curriculum at Cornell University, where she earned a Master's in Statistics.