Augmentation instead of Regularization & Autonomous Driving

Time: 7:00 PM on 6/3

Details

Talk 1: Data augmentation instead of explicit regularization

Speaker: Alex Hernández-García

Abstract: Explicit regularization techniques, such as weight decay and dropout, are the standard and most popular ways of improving the generalization of CNNs. However, these techniques blindly reduce the effective capacity of the model and, importantly, have very sensitive hyper-parameters that require careful fine-tuning. Furthermore, they are used, unquestioned, in combination with other techniques from the "machine learning toolbox", such as SGD, normalization, convolutional layers, and data augmentation, which also provide implicit regularization, and little is known about the interactions among these techniques. In this talk, I will present the results of systematically contrasting data augmentation with explicit regularization on different architectures and object recognition data sets. Data augmentation, unlike explicit regularization, does not reduce the capacity of the model and does not require fine-tuning of hyper-parameters. Moreover, we have recently shown that models trained with heavier data augmentation learn representations more similar to those measured in the human visual cortex. In sum, I will show how replacing weight decay and dropout with data augmentation can safely free us from the hassle of fine-tuning sensitive hyper-parameters, potentially achieve better performance, and yield more biologically plausible representations.
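
To make the idea concrete, here is a minimal sketch (not from the talk) of training a CNN with data augmentation in place of weight decay and dropout, assuming PyTorch/torchvision; the dataset, model, and hyper-parameter values are illustrative assumptions only.

```python
# Sketch: augmentation instead of explicit regularization.
# All names and hyper-parameter values are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

# Heavier data augmentation stands in for explicit regularizers.
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)  # no dropout layers added
# weight_decay=0.0: no explicit L2 penalty, only the implicit effect of augmentation.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()

for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```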

Bio: Alex Hernández-García is a final-year PhD candidate at the Institute of Cognitive Science of the University of Osnabrück. After completing his M.Sc. at the University Carlos III of Madrid, Spain, he moved to Berlin in 2016 to start a PhD on biologically-inspired machine learning with a Marie Sklodowska-Curie ITN grant. Although his main background is in machine learning and computer vision, he has an interdisciplinary profile and interests in other fields such as computational neuroscience, as reflected by his internships at the Spinoza Centre for Neuroimaging in Amsterdam and the Cognition and Brain Sciences Unit of the University of Cambridge. His paper "Further advantages of data augmentation on convolutional neural networks" recently won the Best Paper Award at the International Conference on Artificial Neural Networks (ICANN).

-

Talk 2: Tackling autonomous driving with a single neural network

Speaker: Markus Hinsche

Abstract: Deep networks can be trained on demonstrations of human driving to learn to follow roads and avoid obstacles. This is possible with a single end-to-end network learning all parts of the driving task at once. I will give a short introduction to autonomous driving stacks and guide you through the implementation of this network, which was introduced in the paper "End-to-end Driving via Conditional Imitation Learning". We open-sourced our implementation (https://github.com/merantix/imitation-learning) and wrote a Medium post (https://medium.com/merantix/journey-from-academic-paper-to-industry-usage-cf57fe598f31).
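
For a rough idea of the architecture, here is a minimal sketch (not the Merantix implementation) of a command-conditioned branched network in the spirit of the conditional imitation learning paper, written in PyTorch; layer sizes, input shapes, and names are illustrative assumptions.

```python
# Sketch: a branched network where a high-level navigation command selects
# which output head produces the driving action. Sizes are assumptions.
import torch
import torch.nn as nn

class BranchedImitationNet(nn.Module):
    def __init__(self, num_commands=4, num_actions=3):
        super().__init__()
        # Shared perception backbone over camera images.
        self.perception = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One action head per command (e.g. follow lane, turn left,
        # turn right, go straight); the command picks the branch.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_actions))
            for _ in range(num_commands)
        ])

    def forward(self, image, command):
        features = self.perception(image)
        # Evaluate every branch, then gather each sample's branch by its command.
        outputs = torch.stack([branch(features) for branch in self.branches], dim=1)
        idx = command.view(-1, 1, 1).expand(-1, 1, outputs.size(-1))
        return outputs.gather(1, idx).squeeze(1)

# Training would minimize the error against the human demonstration actions,
# e.g. an L1 loss on (steering, throttle, brake).
model = BranchedImitationNet()
images = torch.randn(8, 3, 88, 200)    # batch of camera frames
commands = torch.randint(0, 4, (8,))   # high-level navigation commands
actions = model(images, commands)      # predicted actions, one row per sample
```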

Bio: Markus Hinsche is a Software Engineer working on Machine Learning at Merantix and one of that rare breed of people actually from the Berlin area. He is eager to explore new topics every day. To satisfy this hunger for the unknown, Markus worked at various startups after receiving his Master's degree in IT Systems Engineering at the Hasso Plattner Institute in Potsdam.