Make it better! Hyperparameter Optimization


Details

Y-DATA Meetup #7
Make it better! Hyperparameter Optimization
[masked]:00

Hosted by Taboola
Talks are in English
-------
Intro:

The improved performance of ML models is often the result of increased model complexity. This added complexity may come from architectural choices (changes to the model topology, additional weights) or from an increasingly important factor: a growing number of hyperparameters. More hyperparameters mean an ever-growing search space, making it difficult and time-consuming to choose an optimal configuration.
There are several approaches to hyperparameter optimization (HO): manual search, grid search, random search, Bayesian optimization, and evolutionary methods. During this meetup we will cover two of them:

Bayesian optimization and its applications in HO, and a new physics-inspired method in which hyperparameters are interpreted as controlling the level of correlated noise during training.
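To make the first two baseline approaches concrete, here is a minimal sketch of grid search versus random search. The objective function, hyperparameter names ("lr", "reg"), and search ranges are all illustrative assumptions, not from either talk:

```python
import itertools
import random

# Toy objective standing in for an expensive validation run.
def validation_loss(lr, reg):
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

# Grid search: evaluation cost grows multiplicatively with each new hyperparameter.
grid = {"lr": [0.001, 0.01, 0.1, 1.0], "reg": [0.0, 0.01, 0.1]}
best_grid = min(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=lambda p: validation_loss(**p),
)

# Random search: a fixed budget of samples, independent of dimensionality.
random.seed(0)
best_random = min(
    ({"lr": 10 ** random.uniform(-3, 0), "reg": random.uniform(0, 0.1)}
     for _ in range(12)),
    key=lambda p: validation_loss(**p),
)

print(best_grid)    # here the grid happens to contain the optimum (lr=0.1, reg=0.01)
print(best_random)
```

The contrast hints at why the dimensionality problem above matters: the grid's size explodes as hyperparameters are added, while random search keeps a fixed budget, which is why Bayesian and other model-based methods that spend that budget adaptively are attractive.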

More info about Y-DATA is here: bit.ly/ydata-website
Previous meetups videos are here: bit.ly/youtube-ydata
-----------
Agenda:

18:00 - 18:30 Registration, Mingling, Snacks & Beer

18:30 - 19:15
Physics-inspired training via the path in a hyperparameter space
Mykola Maksymenko (PhD), R&D Director at SoftServe

19:15 - 19:30 Break

19:30 - 20:15
Introduction to Bayesian Optimization
Nathaniel Bubis (M.Sc.), Algorithms team lead at Healthy.io
-----------
Talk Details:

Talk #1: Physics-inspired training via the path in a hyperparameter space

Abstract:
Efficient search for optimal hyperparameters is essential for training deep architectures and is a core part of any AutoML pipeline. Typical approaches, however, are far from optimal: they rely on repeated sampling of the hyperparameter space or on a greedy search for the best set of hyperparameters.
In our approach, we train models in a combined weight-hyperparameter space, which yields an optimal scheduling protocol (path) for the hyperparameters. The algorithm is based on the physical intuition of interpreting hyperparameters as an effective temperature controlling noise in the system, and it requires only negligible computational cost compared to parallel grid approaches.
This leads to faster training times and improved resistance to overfitting, and we show a systematic decrease in the absolute validation error, improving over benchmark results.
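The "hyperparameter as temperature" intuition can be illustrated very loosely with an annealing-style sketch. To be clear, this is NOT the speaker's algorithm; it is a generic toy in which the variance of injected gradient noise is cooled along a schedule while minimizing a toy loss. The loss, step size, and linear cooling path are all assumptions:

```python
import math
import random

# Toy 1-D loss with minimum at w = 2.0.
def grad(w):
    return 2.0 * (w - 2.0)

random.seed(0)
w = -5.0
steps = 500
for t in range(steps):
    # Treat a hyperparameter as an effective temperature: high early
    # (noisy, exploratory updates), annealed to zero by the end.
    temperature = 1.0 * (1.0 - t / steps)
    noise = random.gauss(0.0, math.sqrt(temperature))
    w -= 0.05 * (grad(w) + noise)

print(w)  # ends near the minimum at w = 2.0
```

The point of the analogy is only that a single scheduled quantity (the "path" through hyperparameter space) replaces repeated full training runs at fixed hyperparameter settings.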

Bio:
Mykola is R&D Director at SoftServe, where his team focuses on deep-technology R&D and consulting in Artificial Intelligence, Human-Computer Interaction, Extended Reality, and Edge Computing.
Mykola holds a PhD in Theoretical Physics and has more than 10 years of academic and industry research experience, with previous research posts at the Max Planck Society and the Weizmann Institute of Science.

Talk #2: Introduction to Bayesian Optimization

Abstract:
Healthy.io aims to transform people's smartphone cameras into clinically approved medical devices. One of the main challenges in this task is finding the optimal thresholds for multiple independent classifiers and regressors based on both classical computer vision and deep learning. This talk will provide a brief introduction to the mathematical underpinnings of Bayesian Optimization, and discuss the use of Ax, a Bayesian Optimization framework recently released by Facebook.
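The core loop of Bayesian Optimization is: fit a cheap surrogate model to the evaluations so far, then choose the next point by optimizing an acquisition function that trades off exploitation (low predicted value) against exploration (high uncertainty). In practice one would use a framework such as Ax with a Gaussian-process surrogate; the sketch below deliberately replaces the GP with a crude nearest-neighbor stand-in, and the objective is a toy, so treat every detail as an illustrative assumption:

```python
def objective(x):
    # Stands in for an expensive black-box evaluation (e.g. a classifier
    # threshold sweep); minimum at x = 0.3.
    return (x - 0.3) ** 2

def surrogate(x, observed):
    # Predicted mean = value at the nearest evaluated point;
    # predicted uncertainty grows with distance from it.
    xn, yn = min(observed, key=lambda p: abs(p[0] - x))
    return yn, abs(x - xn)

def acquisition(x, observed, kappa=1.0):
    mean, uncertainty = surrogate(x, observed)
    return mean - kappa * uncertainty  # lower confidence bound (to minimize)

observed = [(x, objective(x)) for x in (0.0, 1.0)]  # initial design
candidates = [i / 200 for i in range(201)]
for _ in range(15):
    x_next = min(candidates, key=lambda x: acquisition(x, observed))
    observed.append((x_next, objective(x_next)))

best_x, best_y = min(observed, key=lambda p: p[1])
print(best_x, best_y)  # best_x lands near the optimum at 0.3
```

The sample efficiency comes from the acquisition step: instead of spending evaluations uniformly like grid or random search, each new point is placed where the surrogate says it is most promising.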

Bio:
Nathaniel Bubis (M.Sc.) leads the algorithms team at Healthy.io and has previously led teams at Amazon's Lab126. He is especially interested in the use of Bayesian methodologies in machine learning.