Search and recommendation technology is blossoming in Melbourne, but there is a definite lack of an unofficial, relaxed gathering for fruitful exchange of information and experience.
Our aim is to make the Meetup more welcoming to all search and recommendation developers, data scientists, and researchers in Melbourne who face similar challenges, while maintaining the level of technical depth that we've come to enjoy.
Join us for pizza, beer, and discussion about search tech!
Please feel free to suggest quick (15-minute) presentations, whether on a problem you've solved, a problem you need help solving, or an interesting experience using search and recommendation technologies.
Title: You Can’t Improve What You Don’t Measure: How to Design A/B Online Metrics
Speaker: Dr. Widad Machmouchi
Abstract: Online experimentation is becoming more and more popular. Controlled experiments with thousands or even millions of users are used to establish causal relationships between a new treatment and a change in user behavior. Such A/B experimentation is now used widely in industries related to social media, e-commerce, online publishing, search engines, etc., which try to optimize for engagement, revenue, and user success, among other aspects. One of the key factors in evaluating online controlled experiments is metrics. They help discern whether the treatment's effect on users was desired and therefore guide the ship decisions of the teams building the new treatments. For that reason, good A/B metrics are of critical importance for making sound data-driven decisions. Yet it is very easy to build A/B metrics that suffer from undetected weaknesses and eventually point in the wrong direction, unknowingly leading to incorrect ship decisions. Therefore, great care has to be devoted to the proper design of A/B metrics that are expressive, robust, and trustworthy. In this talk, we discuss a few important lessons learnt while designing A/B ship metrics for the large search engine Bing.com. Based on many years of experience, we elaborate on a number of important aspects that should be taken into account when designing online A/B metrics, with user satisfaction metrics specifically in mind.
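The core mechanic the abstract describes, comparing a per-user metric between a control and a treatment group to decide whether an effect is real, can be sketched in a few lines. This is a generic illustration with made-up data, not Bing's methodology; the metric name and the simulated values are hypothetical, and the comparison uses a standard Welch's t statistic:

```python
import random
import statistics

def welch_t(control, treatment):
    """Welch's t statistic comparing the mean of a per-user metric
    across two experiment variants (unequal variances allowed)."""
    mc, mt = statistics.mean(control), statistics.mean(treatment)
    vc, vt = statistics.variance(control), statistics.variance(treatment)
    se = (vc / len(control) + vt / len(treatment)) ** 0.5
    return (mt - mc) / se

random.seed(42)
# Hypothetical per-user success metric (e.g. fraction of satisfied sessions).
control = [random.gauss(0.60, 0.10) for _ in range(5000)]
treatment = [random.gauss(0.61, 0.10) for _ in range(5000)]  # small simulated lift

t = welch_t(control, treatment)
print(f"t-statistic: {t:.2f}")  # roughly, |t| > 1.96 suggests a real effect at the 5% level
```

The subtlety the talk addresses is that a statistically significant movement is only meaningful if the metric itself is trustworthy: a poorly designed metric can move "significantly" in the wrong direction.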
Bio: Widad Machmouchi is a Principal Applied Researcher at Bing, Microsoft, where she works in the AI & Research group focusing on user modeling, A/B experimentation, and success measurement. She is part of the Metrics team, which develops online metrics that measure user satisfaction on Bing and serve as the main overall evaluation criterion (OEC) in almost all A/B experiments at Bing. She holds a PhD in Theoretical Computer Science from the University of Washington, Seattle, and is a co-founder of a technology hardware start-up.