Past Meetup

Feeling a Multi-armed Embrace: Lessons from Online Content Optimization


270 people went



For our August Data Science DC Meetup, we're thrilled to have Brian Muller from OpBandit (recently acquired by Vox) presenting the algorithms behind Multi-Armed Bandit models and their importance in web content optimization. Going beyond simple A/B testing, Bandit approaches continuously exploit the knowledge they have gained about user preferences, maximizing value as they learn. As a short postscript, Harlan Harris will talk about Bandit modeling when preference data is incomplete and delayed. Expect to learn about tradeoffs between exploration and exploitation, algorithms for continuous learning and optimization, and how much of a difference they can make in practice.
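To give a flavor of the exploration/exploitation tradeoff before the talk, here is a minimal epsilon-greedy sketch. The click rates, epsilon value, and simulation setup are invented for illustration; the talk covers more sophisticated approaches.

```python
import random

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Explore a random arm with probability epsilon;
    otherwise exploit the arm with the best observed mean reward."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    means = [r / c if c > 0 else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(means)), key=means.__getitem__)

# Illustrative simulation: two headline variants with unknown click rates.
true_rates = [0.02, 0.05]
counts, rewards = [0, 0], [0.0, 0.0]
random.seed(42)
for _ in range(10_000):
    arm = epsilon_greedy(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_rates[arm] else 0.0

print(counts)  # the better variant should accumulate most of the traffic
```

Unlike a fixed 50/50 A/B test, the learner shifts traffic toward the better variant while the experiment is still running, which is where the extra value comes from.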


• 6:30pm -- Networking, Empanadas, and Refreshments

• 7:00pm -- Introduction, Announcements

• 7:15pm -- Presentations and Discussion

• 8:30pm -- Data Drinks (Tonic, 2036 G St NW)



At OpBandit, we built a service that serves different versions of news content to top publishers across six countries. At a high level, whenever a reader requests a page on a publisher's website, our service selects from multiple versions of headlines and photos to deliver the combination of versions that we think the user is most likely to click. This requires decision making on the fly for each request, with hard requirements for speed, reliability, and selection quality. This talk will cover the technical approaches we used (mostly solutions to the multi-armed bandit problem) to make content version selections, as well as the product implications of our choices and the results of each approach.
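One classic solution to the multi-armed bandit problem that fits this per-request setting is Thompson sampling: for each request, sample a plausible click rate for every content version from its Beta posterior and serve the highest draw. The sketch below is a generic illustration with invented click rates, not OpBandit's actual system.

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over content versions."""

    def __init__(self, n_versions):
        self.alpha = [1.0] * n_versions  # clicks + 1 (uniform prior)
        self.beta = [1.0] * n_versions   # non-clicks + 1

    def select(self):
        # Sample a plausible click rate per version; serve the best draw.
        draws = [random.betavariate(a, b)
                 for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, version, clicked):
        if clicked:
            self.alpha[version] += 1.0
        else:
            self.beta[version] += 1.0

# Illustrative simulation: three headline variants with unknown click rates.
random.seed(0)
true_ctr = [0.03, 0.06, 0.04]
sampler = ThompsonSampler(len(true_ctr))
served = [0] * len(true_ctr)
for _ in range(20_000):
    v = sampler.select()
    served[v] += 1
    sampler.update(v, random.random() < true_ctr[v])

print(served)  # traffic should concentrate on the best variant
```

Selection here is a handful of random draws and a max, which is why this family of methods can meet tight per-request latency budgets.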

Brian Muller is the Director of Data Science at Vox Media. Previously, he was the CTO and co-founder of OpBandit (acquired by Vox), a content optimization tool for online publishers. Prior to founding OpBandit, he was the Lead Data Scientist at LivingSocial, where he founded the data science team and oversaw the creation and growth of a big data infrastructure and the teams necessary to support it, all while the customer base grew from thousands to over 70 million users. Before that, he worked as the Web Director for Foreign Policy Magazine under the Washington Post. He has an MS in the Biomedical Sciences and has spent time in academia, working for the Medical University of South Carolina and the Johns Hopkins University School of Medicine on squeezing meaningful information out of vast quantities of genomic data. Follow Brian on Twitter @bmuller.


Traditional multi-armed bandit optimization relies on getting feedback quickly enough to update your utility function with improved estimates of each option's value. This talk will explore options for tackling the problem of bandit-like optimization when feedback is slow and incomplete, such as in direct marketing campaigns where purchases may be delayed by weeks or months. How do you update your utility function when "no" might mean "not yet" and product demand changes over time?
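As a minimal sketch of how "no might mean not yet" can be handled, consider a Beta-Bernoulli bandit where a trial only counts as a failure after a maturation window has passed with no response. This is an illustrative design invented for this post, not necessarily the approach the talk presents.

```python
import itertools
import random

class DelayedBandit:
    """Thompson-style bandit where silence is only counted as a
    failure once a maturation window has elapsed (illustrative design)."""

    def __init__(self, n_arms, maturation=30.0):
        self.alpha = [1.0] * n_arms   # conversions + 1 (uniform prior)
        self.beta = [1.0] * n_arms    # matured non-conversions + 1
        self.maturation = maturation  # e.g. days to wait before calling "no"
        self.pending = {}             # trial_id -> (time_sent, arm)
        self._ids = itertools.count()

    def select(self):
        draws = [random.betavariate(a, b)
                 for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def record_send(self, now, arm):
        trial_id = next(self._ids)
        self.pending[trial_id] = (now, arm)
        return trial_id

    def record_conversion(self, trial_id):
        # A late "yes": credit the arm and stop waiting on this trial.
        _, arm = self.pending.pop(trial_id)
        self.alpha[arm] += 1.0

    def mature(self, now):
        # Trials silent past the window are finally counted as "no".
        expired = [tid for tid, (t, _) in self.pending.items()
                   if now - t >= self.maturation]
        for tid in expired:
            _, arm = self.pending.pop(tid)
            self.beta[arm] += 1.0

# Hypothetical usage: one offer sent at t=0; the conversion arrives at
# t=12, well inside the 30-day maturation window.
bandit = DelayedBandit(n_arms=2, maturation=30.0)
tid = bandit.record_send(0.0, arm=0)
bandit.mature(12.0)             # trial not yet mature: no failure recorded
bandit.record_conversion(tid)   # delayed "yes" credited to arm 0
```

Handling demand that changes over time would additionally require discounting old observations, which this sketch omits.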

Harlan D. Harris is Director of Data Science at EAB, a company that provides enterprise software and best-practices research to higher-education institutions. He is a co-founder of the Data Science DC Meetup and of Data Community DC, Inc. Harlan has a BS in Computer Science from the University of Wisconsin-Madison and a PhD in Computer Science, focusing on Machine Learning and Cognitive Science, from the University of Illinois at Urbana-Champaign. He worked as a researcher in the psychology departments of Columbia University, the University of Connecticut, and New York University before turning to data science and predictive analytics in industry. Harlan is co-author of O'Reilly's Analyzing the Analyzers and tweets about data science as @harlanh.


This event is sponsored by the GWU Department of Decision Sciences, Elder Research, Novetta Solutions, and Booz Allen Hamilton. (Would your organization like to sponsor too? Please get in touch!)