Data Science Classroom: Ensemble Methods


Details
For our December Meetup, we're thrilled to bring you the next installment in our occasional "Data Science Classroom" series on foundational topics in statistics and machine learning. This time, Jay Hyer, a GMU graduate student, will introduce ensemble learning (http://en.wikipedia.org/wiki/Ensemble_learning), an important family of techniques that combine the results from many learners to produce better predictions.
NOTE: This event will be at a different venue -- the gorgeous offices of Gallup, in an old Masonic temple, conveniently located between the Gallery Place and Metro Center metro stations.
Agenda:
6:30pm -- Networking, Empanadas, and Refreshments
7:00pm -- Introduction
7:15pm -- Presentations and discussion
8:30pm -- Adjourn for Data Drinks (Fado, 808 7th St. (http://www.fadoirishpub.com/washington/home))
Abstract: This presentation will review how ensemble learning differs from more traditional machine learning techniques in its approach to modeling. The discussion will cover three popular methods: boosting, bagging and stacking. The talk will finish with an overview of modern advancements and applications of ensemble methods.
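To give a flavor of the bagging approach mentioned in the abstract, here is a minimal, self-contained sketch (not from the talk itself): each weak learner is a simple decision stump trained on a bootstrap resample of a toy 1-D dataset, and the ensemble predicts by majority vote. All names, thresholds, and data here are illustrative assumptions.

```python
import random

random.seed(0)

# Toy 1-D dataset: points below 5.0 are class 0, points at or above are
# class 1, with one noisy label so a single learner can be fooled.
data = [(x / 2.0, 0 if x / 2.0 < 5.0 else 1) for x in range(20)]
data[3] = (data[3][0], 1)  # inject label noise at x = 1.5

def train_stump(sample):
    """Pick the threshold (from the sample's x values) that best
    separates the bootstrap sample: predict 1 when x >= threshold."""
    best_t, best_err = None, float("inf")
    for t, _ in sample:
        err = sum((x >= t) != bool(y) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(stumps, x):
    """Majority vote across the ensemble of stumps."""
    votes = sum(x >= t for t in stumps)
    return int(votes > len(stumps) / 2)

# Bagging: fit each stump on its own bootstrap resample of the data.
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]

print(bagged_predict(stumps, 2.0))  # a point well inside class 0
print(bagged_predict(stumps, 8.0))  # a point well inside class 1
```

Boosting and stacking differ in how the learners are combined: boosting trains learners sequentially, reweighting the examples that earlier learners got wrong, while stacking trains a second-level model on the first-level learners' outputs.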
Bio: Jay K. Hyer (http://www.linkedin.com/in/jayhyer) is currently pursuing a PhD in Computational Science and Informatics at George Mason University, where he also earned his MS in Statistics in 2009. Jay works as a data scientist for a DC-based research, technology and consulting firm, where he contributes to the development and delivery of technology solutions for institutions of higher education. Follow Jay on Twitter at @aDataHead (http://twitter.com/aDataHead).
Sponsors:
This event is sponsored by Intridea (http://www.intridea.com/), Cloudera (http://www.cloudera.com/), Statistics.com (http://bit.ly/12YljkP), Elder Research (http://datamininglab.com/), and MemSQL (http://memsql.com).
