We are excited to announce our next meetup on self-driving cars on 20 March 2018 at the Isarpost event location close to Sendlinger Tor! We will cover topics such as SLAM for self-driving cars, sensor fusion, and of course deep learning. The talks range from applied industry techniques to scientific work, and we are proud to have an exquisite lineup of speakers from Lyft, Intel Labs, and TUM.
Lyft has opened a brand-new office in Munich that focuses on self-driving car technology. They are sponsoring this event with the venue, yummy food & drinks, and - most importantly - presentations about ongoing work in the industry.
6:00pm - Introduction
6:10pm - Talks
SLAM for Self-Driving Cars - Holger Rapp/Wolfgang Hess
Computer Vision and Perception for Autonomous Vehicles - Matt Vitelli
7:00pm - Break
7:30pm - Talks
Deep Learning for Urban Driving - Alexey Dosovitskiy
3D Computer Vision for Self-Driving Cars - Daniel Cremers
We look forward to seeing you there,
Your Meetup Team
SLAM for Self-Driving Cars
Holger Rapp / Wolfgang Hess
Knowledge about the world is essential for autonomous driving. Simultaneous localization and mapping (SLAM) is a technology that can provide highly detailed maps and accurate localization for autonomous vehicles, even in GPS-denied surroundings such as cities. This talk describes the work the Lyft Germany team does in this domain.
We provide an overview of our topics and an outlook on future work. We then take a deep dive into the fast loop-closing algorithm described in the Cartographer paper and explain how it was generalized to 3D.
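The fast loop closing mentioned above is, per the Cartographer paper, a branch-and-bound search over precomputed max-pooled occupancy grids. The toy sketch below (an assumption-laden illustration, not the production algorithm: translation-only, 2D, random data in place of real scans and probability grids; Cartographer also searches over rotation) shows the core idea of bounding a window of candidate offsets by scoring against a coarser max grid:

```python
import heapq
import numpy as np

def precompute(grid, max_depth):
    """grids[h][i, j] = max of grid over the 2^h x 2^h window whose
    lower corner is (i, j), truncated at the array border."""
    grids = [grid]
    for h in range(1, max_depth + 1):
        s = 2 ** (h - 1)
        prev = grids[-1]
        gx = prev.copy()
        gx[:-s, :] = np.maximum(prev[:-s, :], prev[s:, :])
        g = gx.copy()
        g[:, :-s] = np.maximum(gx[:, :-s], gx[:, s:])
        grids.append(g)
    return grids

def score(grid, points, dx, dy):
    """Sum of grid values hit by the scan points shifted by (dx, dy)."""
    return grid[points[:, 0] + dx, points[:, 1] + dy].sum()

def branch_and_bound(grids, points, search, max_depth):
    """Best (dx, dy) in [0, search)^2: best-first search in which the
    score on the depth-h max grid upper-bounds every offset in a
    2^h x 2^h window of candidates."""
    heap = []
    step = 2 ** max_depth
    for dx in range(0, search, step):
        for dy in range(0, search, step):
            heapq.heappush(heap, (-score(grids[max_depth], points, dx, dy),
                                  max_depth, dx, dy))
    while heap:
        neg_bound, h, dx, dy = heapq.heappop(heap)
        if h == 0:
            # Best-first with an admissible bound: the first leaf popped
            # is a provably optimal match.
            return (dx, dy), -neg_bound
        s = 2 ** (h - 1)
        for cx in (dx, dx + s):  # split the window into four quadrants
            for cy in (dy, dy + s):
                if cx < search and cy < search:
                    heapq.heappush(heap,
                                   (-score(grids[h - 1], points, cx, cy),
                                    h - 1, cx, cy))
    return None, -np.inf

# Toy demo: a random "map", 20 scan points, and a 16x16 search window.
rng = np.random.default_rng(42)
grid = rng.random((40, 40))
points = rng.integers(0, 8, size=(20, 2))
max_depth, search = 3, 16
grids = precompute(grid, max_depth)
best, best_score = branch_and_bound(grids, points, search, max_depth)
```

Because the coarse-grid score never underestimates any offset inside its window, whole regions of the search space can be pruned while the result stays exactly optimal; how this bound carries over to 3D is precisely what the talk covers.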
Computer Vision and Perception for Autonomous Vehicles
In this talk, we will discuss some of the current state-of-the-art techniques for processing high-resolution camera imagery and LiDAR point clouds using a combination of classical computer vision techniques, deep learning models, and other heuristics. We will focus on some of the problems that Lyft's Level 5 team has tackled to date, including training an end-to-end network for predicting steering angle from camera data, bounding box detection from high-resolution imagery, LiDAR point cloud segmentation, and fusing these data sources together.
Deep Learning for Urban Driving
Applications of deep learning to autonomous driving are complicated by both logistical and algorithmic difficulties. I will talk about our work on both of these aspects. On the logistical side, we created CARLA, a high-fidelity open urban driving simulator, which we believe will greatly democratize research on autonomous driving. On the algorithmic side, we are working on adapting deep-learning-based methods to the challenging scenario of urban driving. I will talk about our recent work in this direction and our recent efforts to transfer policies learned in simulation to the real world.
3D Computer Vision for Self-Driving Cars
Over the past years, the field of computer vision has matured from a fairly small niche area of computer science into one of the hottest topics in technological development today. In my presentation, I will sketch a number of recent developments in 3D computer vision, in particular in reconstruction from moving cameras. These methods for Simultaneous Localization and Mapping (SLAM) can accurately localize the camera and recover the observed 3D world. They will form a central building block of future self-driving cars.
Wolfgang Hess received his doctorate in nonlinear optimization from the Technische Universität Darmstadt in 2010. Before joining Lyft he worked at Google’s Munich office, most recently on LiDAR-based SLAM for indoor mapping and mobile robots. He is one of the four software engineers who started Lyft’s Munich office.
Holger Rapp studied physics at the University of Heidelberg. He received his PhD in mechanical engineering from the Karlsruhe Institute of Technology in 2012 as a stipendiary of the KSOP (Karlsruhe School of Optics and Photonics). After his PhD he worked at Google Germany and became a founding member of the Cartographer project. In 2017 he left Google to start the German Lyft office. He has published on various topics, including self-driving cars, optical inspection, real-time control of industrial robots, image processing, time-of-flight camera systems, and rapid prototyping.
Matt Vitelli studied Computer Science at Stanford University. He received his MSc in Artificial Intelligence with a focus on Computer Vision. Before Lyft, he worked at Oculus Research developing algorithms for 3D tracking. At Lyft, he works on perception and motion planning for self-driving cars. He is one of the founding software engineers at Level 5.
Alexey Dosovitskiy received his MSc and PhD degrees in mathematics from the Department of Mechanics and Mathematics of Moscow State University in 2009 and 2012, respectively. He spent [masked] as a postdoctoral researcher in the Computer Vision Group of Prof. Thomas Brox at the University of Freiburg in Germany, working mainly on deep learning, in particular unsupervised learning, image generation with neural networks, and motion and 3D structure estimation. Since May 2016 Alexey has worked on deep learning and sensorimotor control at the Intel Visual Computing Lab, led by Dr. Vladlen Koltun.
Daniel Cremers obtained a PhD in computer science from the University of Mannheim, Germany. Subsequently he spent two years as a postdoctoral researcher at UCLA and one year as a permanent researcher at Siemens Corporate Research in Princeton, NJ. From 2005 until 2009 he was an associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair for Computer Vision and Pattern Recognition at the Technical University of Munich. His publications have received numerous awards, most recently the SGP 2016 Best Paper Award and the CVPR 2016 Best Paper Honorable Mention. Prof. Cremers has received three ERC grants and the Gottfried Wilhelm Leibniz Prize 2016. He is a cofounder of several companies, in particular of Artisense Corporation, a high-tech startup focused on technologies for self-driving cars.