Past Meetup

Bayesian Optimization: From A/B to A-Z Testing & Artificial Intelligence Debate


146 people went


Details

Oracle Campus - Wednesday August 27, 2014 @ 6:00pm MDT

NOTE: For those unable to attend in person, register and we will email you a livestream link 2 hours prior to the event.

Location: Oracle Campus, Bldg 1 - 500 Eldorado Blvd, Broomfield, CO 80021 Map: https://goo.gl/maps/KhWc6

Go to Bldg 1 and check in with security. You must present a government ID (driver's license, state ID, or passport).

This event allows no guests (each attendee must create their own Meetup account). Anyone whose Meetup account does not show a first and last name matching their government ID must e-mail "[masked]" with that information. RSVPs close after August 25.

Agenda:

6:00 - 6:20 Schmooze - Food will be served in the lobby

6:20 - 6:30 Announcements

6:30 - 7:45 Bayesian Optimization: From A/B to A-Z Testing by Michael Mozer

7:45 - 8:30 Artificial Intelligence Debate

8:30 - 9:00 Networking

Bayesian Optimization: From A/B to A-Z Testing - Abstract

A/B testing is a traditional method of conducting a randomized controlled experiment to compare the effect of two treatments, A and B, on human subjects. For example, two alternative banner ads may be served to evaluate which is more effective in driving click-throughs. A/B testing is used not only for marketing and web design but is also the dominant paradigm in the experimental behavioral sciences, where it is used to understand human learning, reasoning, and decision making. Although the method can be extended to compare a handful of treatments, it does not solve the problem one often faces: searching over a large, possibly combinatorial or continuous space of alternatives to identify the treatment that achieves the best outcome. We describe a solution to this problem using Gaussian process surrogate-based optimization, a Bayesian method that relies on generative probabilistic models of human choice and judgment. Instead of assigning many human subjects to each of a few treatments, the technique evaluates a few subjects on each of many treatments. The technique leverages structure in the space of treatments to infer the function that relates treatment to outcome. We show the efficiency and accuracy of the technique on a range of problems, including identifying preferred color combinations, maximizing charitable donations, and improving student learning of facts and concepts. This work is in collaboration with Robert Lindsey (University of Colorado) and Harold Pashler (UC San Diego).
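To make the idea concrete, here is a minimal, self-contained sketch of Gaussian process surrogate-based optimization in NumPy. It is not the speakers' implementation: the kernel, the expected-improvement acquisition rule, and the toy "outcome" function (a hypothetical noisy objective peaking near x = 0.7) are all illustrative assumptions. The loop shows the core pattern the abstract describes: fit a GP to a few noisy evaluations over many candidate treatments, then pick the next treatment where the expected improvement over the current best is largest.

```python
import numpy as np
from math import erf

def rbf_kernel(a, b, length_scale=0.15):
    # Squared-exponential kernel: encodes the assumption that nearby
    # treatments produce similar outcomes (the "structure" the GP exploits).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_cand, noise=1e-4):
    # Standard GP regression posterior mean and std at candidate points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_cand)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI acquisition: expected gain over the best outcome observed so far.
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(objective, n_iters=15, seed=0):
    rng = np.random.default_rng(seed)
    x_cand = np.linspace(0.0, 1.0, 200)      # the space of candidate treatments
    x = rng.uniform(0.0, 1.0, size=3)        # a few random initial treatments
    y = np.array([objective(xi) for xi in x])
    for _ in range(n_iters):
        mu, sigma = gp_posterior(x, y, x_cand)
        x_next = x_cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        x = np.append(x, x_next)
        y = np.append(y, objective(x_next))  # one noisy "subject" per treatment
    return x[np.argmax(y)], y.max()

# Hypothetical noisy outcome function with its optimum near x = 0.7.
_rng = np.random.default_rng(42)
def outcome(x):
    return -(x - 0.7) ** 2 + 0.01 * _rng.normal()

best_x, best_y = bayes_opt(outcome)
```

With only ~18 noisy evaluations the loop homes in on the region of the optimum, which is the abstract's point: a few subjects on each of many treatments, rather than many subjects on each of a few.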

Bio

Michael Mozer received a Ph.D. in Cognitive Science at the University of California at San Diego in 1987. Following a postdoctoral fellowship with Geoffrey Hinton at the University of Toronto, he joined the faculty at the University of Colorado at Boulder and is presently a Professor in the Department of Computer Science and the Institute of Cognitive Science. He is secretary of the Neural Information Processing Systems (NIPS) Foundation and has formerly served as NIPS Program Chair and General Chair, and as President of the Cognitive Science Society. He has over 120 publications in machine learning and cognitive modeling. His current research embeds machine learning techniques in software tools that improve how people learn, remember, and make decisions. He is also interested in applying machine learning prediction and control techniques to solve problems in engineering, including intelligent environments, speech and language recognition systems, and mining of customer behavior.

Artificial Intelligence Debate - Resolved: AI will be Human Friendly

Pro: Michael Walker. Brief Argument: AI will simply augment human capabilities and can be designed and programmed to be friendly to humans. Human authority and control over AI will be a built-in feature, and any potential bugs can be fixed without significant damage. Realistic risk assessment is required - AI Cassandras are fear-mongering and overstating the real risks.

Con: Michael Malak. Brief Argument: AI will likely become uncontrollable by humans beyond a certain evolutionary point. Within a very short time, AI will become smarter than humans. There is a significant risk that as AI grows ever smarter, it will come to view humans as unnecessary and as an obstacle to further AI evolution.