This meetup is kindly hosted and sponsored by Microsoft in their London office. There will be two speakers, both from Microsoft Research NYC.
Please note that a final attendee list must be passed to Microsoft in advance, so keep your RSVP up to date.
See you there!
The London ML team.
• A combinatorial prediction market for the U.S. Elections - Miroslav Dudik
Prediction markets are emerging as a powerful and accurate method of aggregating information from populations of experts (and non-experts). Traders in prediction markets are incentivized to reveal their information by buying and selling "securities" for events such as "Hillary Clinton will win the U.S. Presidency in 2016". The prices of securities reflect the aggregate belief about the events, and the key challenge is to price the securities correctly.
I will present an algorithm for pricing multiple logically interrelated securities. Our approach lies somewhere between the industry standard---treating related securities as independent and thus not transmitting any information from one security to another---and a full combinatorial market maker, for which pricing is computationally intractable. Our techniques borrow heavily from variational inference in exponential families. We prove several favorable properties of our scheme and evaluate its information aggregation performance on survey data involving hundreds of thousands of complex predictions about the 2008 U.S. presidential election. I will also give an example of the real-world deployment of our pricing scheme in the 2012 U.S. presidential election, where we modeled probabilities of 10^33 related events and updated them in real time.
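For context, the "industry standard" baseline the abstract refers to prices each security independently with a cost-function market maker. A minimal sketch of one common choice, Hanson's logarithmic market scoring rule (LMSR), is below; this is background for the talk, not the speakers' combinatorial pricing scheme, and the parameter values are illustrative:

```python
import math

def lmsr_prices(quantities, b=100.0):
    """LMSR prices for the outcomes of a single security.

    quantities[i] is the number of outstanding shares of outcome i;
    b controls liquidity (larger b = prices move less per trade).
    Prices are softmax(q / b), so they sum to 1 and can be read as
    the market's aggregate probabilities for the outcomes.
    """
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b)).

    A trade that moves outstanding shares from q to q' costs the
    trader C(q') - C(q); prices above are the gradient of C.
    """
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# A two-outcome security ("candidate wins" / "candidate loses")
# after traders have bought 50 and 10 shares respectively:
prices = lmsr_prices([50.0, 10.0])  # first outcome priced higher
```

Pricing each security this way ignores logical relations between securities; the combinatorial approach in the talk propagates information across them while keeping pricing tractable.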
Joint work with Sebastien Lahaie, David Pennock and David Rothschild.
Bio: Miroslav Dudik is a senior researcher at Microsoft Research NYC. His interests are in combining theoretical and applied aspects of machine learning, statistics, convex optimization and algorithms. He received his PhD from Princeton in 2007.
• Interactive Machine Learning with Contextual Bandits - Alekh Agarwal
The increasing success of Machine Learning is resulting in learned models being deployed in the front-ends of a variety of applications. An inevitable question is: How can I use the data gathered using my current model to learn an even better model? Unfortunately, traditional machine learning is ill-equipped to answer such _counterfactual_ questions.
In this talk, I will describe how standard machine learning algorithms fail to produce correct predictions for counterfactual questions. I will then present two recipes to counter this problem. The first solution addresses scenarios where we have offline access to the data from a randomized experiment, and want to predict reliably how well a new model will do online. The second scenario aims to adaptively tune randomization so as to explore no more than necessary to guarantee good online performance. While the first scenario is often practically convenient, the second one truly unlocks the power of online learning, improving the deployed models on the fly in a principled manner.
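A standard tool for the first scenario (predicting offline how a new model would do online, from randomized logged data) is the inverse propensity score (IPS) estimator. A minimal sketch follows; the data layout and policy are illustrative assumptions, not the speakers' code:

```python
def ips_estimate(logs, policy):
    """Estimate the average reward a new policy would earn online,
    using data logged by a randomized logging policy.

    Each log entry is (context, action, reward, propensity), where
    propensity is the probability the logging policy chose that
    action in that context. Rewards are reweighted by 1/propensity
    whenever the new policy agrees with the logged action, which
    makes the estimate unbiased as long as every propensity is > 0.
    """
    total = 0.0
    for context, action, reward, propensity in logs:
        if policy(context) == action:
            total += reward / propensity
    return total / len(logs)

# Logs from uniform-random exploration over 2 actions (propensity 0.5):
logs = [
    (0, 0, 1.0, 0.5),
    (0, 1, 0.0, 0.5),
    (1, 1, 1.0, 0.5),
    (1, 0, 0.0, 0.5),
]
# Hypothetical new policy: choose action equal to the context.
new_policy = lambda context: context
# ips_estimate(logs, new_policy) -> 1.0
```

The second scenario in the abstract goes further: rather than fixing the randomization up front, the exploration itself is tuned adaptively so the system explores no more than needed.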
Bio: Alekh Agarwal is a researcher at the New York lab of Microsoft Research, where his research is primarily focused on machine learning, statistics and convex optimization. Prior to that, he obtained his PhD from UC Berkeley under the supervision of Peter Bartlett and Martin Wainwright. He received the MSR PhD Fellowship in 2009 and Google PhD Fellowship in 2011.