What we're about

Deep learning is a rapidly growing field with dozens (editor: hah) of new publications each week on arXiv. This group is a time set aside to go over interesting research from the previous week. We'll pick one or a few papers to read and discuss.

Upcoming events (1)

Top-K Off-Policy Correction for a REINFORCE Recommender System

This session we will discuss the research paper:

Top-K Off-Policy Correction for a REINFORCE Recommender System
https://arxiv.org/abs/1812.02353

Everyone should take time to read the paper in detail several days in advance of the meetup, and to the greatest extent possible, read the key references from the paper.

Full Abstract:

Industrial recommender systems deal with extremely large action spaces – many millions of items to recommend. Moreover, they need to serve billions of users, who are unique at any point in time, making a complex user state space. Luckily, huge quantities of logged implicit feedback (e.g., user clicks, dwell time) are available for learning. Learning from the logged feedback is however subject to biases caused by only observing feedback on recommendations selected by the previous versions of the recommender. In this work, we present a general recipe of addressing such biases in a production top-K recommender system at YouTube, built with a policy-gradient-based algorithm, i.e. REINFORCE [48]. The contributions of the paper are: (1) scaling REINFORCE to a production recommender system with an action space on the orders of millions; (2) applying off-policy correction to address data biases in learning from logged feedback collected from multiple behavior policies; (3) proposing a novel top-K off-policy correction to account for our policy recommending multiple items at a time; (4) showcasing the value of exploration. We demonstrate the efficacy of our approaches through a series of simulations and multiple live experiments on YouTube.
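To seed the discussion of contribution (3): the paper's top-K correction scales the standard importance weight π_θ(a|s)/β(a|s) by an extra multiplier λ_K(s,a) = K·(1 − π_θ(a|s))^(K−1), which accounts for the fact that K items are recommended at once. A minimal sketch of that per-example weight (the function name and scalar-argument signature are illustrative, not from the paper):

```python
def topk_correction_weight(pi: float, beta: float, K: int) -> float:
    """Top-K off-policy correction weight, as sketched from the paper.

    pi:   probability of the chosen item under the current policy pi_theta
    beta: probability of the same item under the logging (behavior) policy
    K:    number of items recommended per request

    The standard off-policy correction is pi / beta; the top-K multiplier
    K * (1 - pi)**(K - 1) down-weights items the policy already selects
    with high probability, since they are likely to appear in the top-K
    set anyway.
    """
    return (pi / beta) * K * (1.0 - pi) ** (K - 1)
```

Note that for K = 1 the multiplier is 1, so the weight reduces to the ordinary off-policy correction π/β, which is a useful sanity check when reading the derivation.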
