We have two fantastic talks this month!
Train in sim, drive in real with Donkeycar and CycleGAN
Abstract: Donkeycar is a DIY autonomous car. I have had some luck training the car in simulation, using CycleGAN to make the simulated data "look real", and then deploying a model trained on that "fake real" data to the real world.
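The pipeline in the abstract can be sketched in a few lines. This is a hypothetical illustration, not Donkeycar's actual code: the generator, trainer, frame shapes, and steering values are all stand-in assumptions.

```python
import numpy as np

# Hypothetical sketch of the sim-to-real pipeline (all names/shapes assumed):
# 1. collect frames + steering labels in the simulator,
# 2. run each sim frame through a trained CycleGAN generator so it "looks real",
# 3. train the driving model on the translated ("fake real") frames,
# 4. deploy that model on the physical car, which then sees real frames.

def g_sim2real(frame):
    # Stand-in for a trained CycleGAN generator; in practice this is a
    # learned network, here it just returns a frame of the same shape.
    return frame.copy()

def train_driving_model(frames, steering):
    # Stand-in for supervised training; returns a trivial "model" that
    # always predicts the mean steering angle it saw.
    mean_angle = float(np.mean(steering))
    return lambda frame: mean_angle

sim_frames = [np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(4)]
steering = [0.1, -0.2, 0.0, 0.3]

fake_real = [g_sim2real(f) for f in sim_frames]   # step 2: translate
model = train_driving_model(fake_real, steering)  # step 3: train on "fake real"
angle = model(np.zeros((120, 160, 3), np.uint8))  # step 4: drive on a real frame
```

The key design point is that the driving model never sees raw simulator frames at train time, so at deploy time the real camera feed is closer to its training distribution.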
Speaker: Josh recently graduated in Robotics/Mechatronics from Swinburne. He has worked at Silverpond for just over two years and is involved in the Autonomous Robocar track days with Andy G. He likes to get something working before figuring out why it works at all.
Faster Deep Learning Training with Automatic Mixed Precision
Abstract: Automatic Mixed Precision is an easy-to-use method for improving training performance by up to 3x by utilizing NVIDIA Tensor Cores. We will review the theory behind mixed precision using Tensor Cores, show how to use it in model training scripts, and highlight some real-world performance improvements observed by customers. Compared to single precision, mixed precision offers many benefits: 2x better use of the available DRAM bandwidth, smaller memory footprints that allow larger batch sizes or network architectures with more parameters to fit in GPU memory, and use of Volta's Tensor Cores to boost raw math throughput by up to 8x.
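To make the idea concrete, here is a minimal NumPy sketch of the core trick behind mixed-precision training: keep float32 "master" weights, run the math in float16, and scale the loss so tiny gradients don't underflow in half precision. This is a toy illustration of the concept, not NVIDIA's actual AMP API (the model, loss scale, and learning rate are all assumptions).

```python
import numpy as np

np.random.seed(0)

# Toy "model": a single linear layer. Master weights stay in float32;
# the forward/backward math runs in float16.
master_w = np.random.randn(4, 1).astype(np.float32)
x = np.random.randn(8, 4).astype(np.float16)
y = np.random.randn(8, 1).astype(np.float16)
loss_scale = np.float16(128.0)  # static loss scale (assumed value)
lr = 0.01

for _ in range(10):
    w16 = master_w.astype(np.float16)           # cast weights down for compute
    err = x @ w16 - y                           # float16 forward pass
    # Scale the error before the backward pass so small float16
    # gradients don't flush to zero.
    grad16 = x.T @ (err * loss_scale) / np.float16(len(x))
    # Unscale in float32 and update the float32 master weights.
    grad32 = grad16.astype(np.float32) / float(loss_scale)
    master_w -= lr * grad32

# The memory-footprint benefit: float16 tensors take half the bytes.
assert x.nbytes == x.astype(np.float32).nbytes // 2
```

In practice, frameworks wrap exactly this pattern (casting, loss scaling, float32 master weights) behind one or two API calls, which is what makes AMP "automatic".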
Bio: Maggie Zhang joined NVIDIA in 2017 and currently works on deep learning frameworks. She received her PhD in Computer Science & Engineering from the University of New South Wales in 2013. Her research background includes GPU/CPU heterogeneous computing, compiler optimization, computer architecture, and deep learning.