"Meta-Learning" and "Kernel Methods for Comparing Distribution"


Bangkok Machine Learning meetup is back!

This time we will learn about two interesting topics in machine learning from two very special guests, Sam Witteveen and Wittawat Jitkrittum.

Sam is a Google Developer Expert in Machine Learning. He regularly shares his knowledge at events and trainings across Asia and is a co-organiser of the Singapore TensorFlow and Deep Learning group.

Wittawat is a postdoctoral researcher in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems. One of his research papers received one of the three Best Paper Awards at NIPS 2017.

Venue: Room 206, 2nd Floor, 50 Years Anusorn Building (BBA Building), Chulalongkorn Business School (https://goo.gl/maps/httcx4S62642)


6:00 PM - 6:20 PM | Registration

6:20 PM - 6:50 PM | Sam Witteveen, Google Developer Expert in Machine Learning
Introduction to Meta-Learning: the future of Machine Learning

7:00 PM - 8:10 PM | Wittawat Jitkrittum, Max Planck Institute for Intelligent Systems
Introduction to Kernel Methods for Comparing Distributions

8:10 PM - 8:30 PM | Q&A and Networking


Introduction to Meta-Learning: the future of Machine Learning

Presented by: Sam Witteveen (https://www.linkedin.com/in/samwitteveen/)

Talk Abstract:

Sam will look at developments in ML models that create other ML models, as featured in the AutoML and Learning to Learn papers. These techniques have produced NASNet, one of the best-performing models on ImageNet to date.


Introduction to Kernel Methods for Comparing Distributions

Presented by: Wittawat Jitkrittum (http://wittawat.com)

Talk Abstract:

Kernel methods provide a theoretically grounded framework for constructing nonlinear learning algorithms from linear ones. A well-known example is the support vector machine, a classifier whose decision boundary is made nonlinear by a kernel. A perhaps less known use of kernel methods is for comparing probability distributions. This is the main topic of this talk.

In this tutorial, I will introduce the kernel mean embedding, a powerful technique for representing probability distributions as points in a high-dimensional (possibly infinite-dimensional) Hilbert space. Distributions embedded as points can then be manipulated using operations of the Hilbert space. A useful operation is measuring the distance between two distributions. The distance as measured in the embedded space is known as the Maximum Mean Discrepancy (MMD). A practical application of MMD is in two-sample testing, where we are given two collections of samples, and the goal is to determine whether they follow the same distribution. The two-sample test with MMD as the test statistic can be seen as a "kernelized" (nonlinear) version of the well-known t-test.
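To make the idea concrete, here is a minimal sketch of the (biased) MMD² estimator with a Gaussian kernel, computed from two sets of samples. The function names, bandwidth choice, and toy data are illustrative, not from the talk itself:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of X and rows of Y
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2 * X @ Y.T)
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased estimate of MMD^2: mean within-sample similarity
    # minus twice the mean cross-sample similarity
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))  # samples from P
Y = rng.normal(0.0, 1.0, size=(500, 2))  # samples from Q, with Q = P
Z = rng.normal(2.0, 1.0, size=(500, 2))  # samples from a shifted Q

print(mmd2_biased(X, Y))  # close to zero: same distribution
print(mmd2_biased(X, Z))  # clearly positive: different distributions
```

In a real two-sample test, this statistic would be compared against a threshold derived from its null distribution (for example, via a permutation test) to decide whether the two samples come from the same distribution.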

First, I will motivate the importance of two-sample testing, and introduce the MMD and the theory behind it. Next, I will describe the statistical test that uses MMD as the test statistic, and its applications. Finally, I will demonstrate how to use MMD as the objective for training a deep generative model for simple image generation. Necessary background material will be reviewed at the beginning of the talk.

Prerequisites: basic knowledge of statistics and linear algebra


James -[masked]
Arm -[masked]

The talk will be in English.