We're excited to announce our next meetup featuring Professor Jürgen Schmidhuber, one of the pioneers of Deep Learning Neural Networks and Director of the Swiss AI Lab IDSIA.
To guarantee a place, please also register with Skills Matter here: https://skillsmatter.com/meetups/7401-deep-learning-rnnaissance-and-impact-on-modern-life
Remember to use our hashtag: #DLLondon and follow us at @deeplearningldn
See you there!
Emilia & Ali
6:15 pm - Arrival and Networking
6:30-7:00 pm - Deep Learning: Impact on Modern Life by Jeremy Purches, Director of High Performance Computing and Deep Learning, NVIDIA
Jeremy will explain how graphics processing units (GPUs) enable various deep learning techniques. He will cover use cases from across a wide range of industries, plus the latest news on NVIDIA's toolkits and software, including DIGITS, its open-source deep learning platform. Further information can be found here: https://developer.nvidia.com/deep-learning
Jeremy Purches is Director of High Performance Computing (HPC) and Deep Learning (DL) in the UK for NVIDIA. This role entails engaging with UK research establishments, academia and industry to promote the use of graphics processing units (GPUs) to accelerate HPC and DL applications. Prior to joining NVIDIA in 2012, Jeremy spent 12 years at Hewlett-Packard (technically managing the Airbus account and, latterly, directing HPC for the engineering sector across Europe) and, before that, worked in the aerospace sector (MOD at Boscombe Down and then 20 years at Rolls-Royce aero-engines).
7:00-8:15 pm - How to Learn a Program by Prof. Jürgen Schmidhuber, IDSIA
Since 1987, we have published work on general problem solvers that search the space of programs running on general-purpose computers with internal memory. Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our program searchers are asymptotically time-optimal. Some are self-referential and can even learn the learning algorithm itself (recursive self-improvement). Some reinforcement-learn without a teacher to solve very deep algorithmic problems involving billions of steps. And programs learned by our “deep learning” Long Short-Term Memory recurrent networks set the state of the art in handwriting recognition, speech recognition, natural language processing, machine translation, image caption generation, etc. Google and others have made them available to a billion users. I will briefly review deep supervised / unsupervised / reinforcement learning, and discuss the latest state-of-the-art results in numerous applications.
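The Long Short-Term Memory networks mentioned in the abstract are recurrent nets whose cell state is updated through learned multiplicative gates, letting error signals survive across long time lags. As a rough sketch only (toy dimensions, random untrained weights, and all variable names are hypothetical, not from the talk), a single LSTM step can be written in plain NumPy as:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of an LSTM cell (modern variant with a forget gate).
    W stacks the weight matrices of all four gates; b stacks their biases."""
    H = h_prev.size
    z = W @ np.concatenate([h_prev, x]) + b  # pre-activations for all gates
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell update
    c = f * c_prev + i * g         # cell state: the "constant error carousel"
    h = o * np.tanh(c)             # hidden state emitted at this step
    return h, c

# Toy usage: run the cell over a short random input sequence.
H, D = 4, 3                        # hidden size, input size (arbitrary)
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * H, H + D)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):
    h, c = lstm_step(x, h, c, W, b)
```

Training would backpropagate through many such steps; the additive cell update `c = f * c_prev + i * g` is what allows gradients to flow over thousands of time steps without vanishing.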
Since age 15 or so, Prof. Jürgen Schmidhuber's main scientific ambition has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. He has pioneered self-improving general problem solvers since 1987, and Deep Learning Neural Networks (NNs) since 1991. The recurrent NNs (RNNs) developed by his research groups at the Swiss AI Lab IDSIA & USI & SUPSI and at TU Munich were the first RNNs to win official international contests. They have revolutionised connected handwriting recognition, speech recognition, machine translation, optical character recognition and image caption generation, and are now in use at Google, Microsoft, IBM, Baidu, and many other companies. The first four members of DeepMind (sold to Google for over $600M) included two former PhD students from his lab.

His team's Deep Learners were also the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning & pattern recognition (more than any other team). They were also the first to learn control policies directly from high-dimensional sensory input using reinforcement learning. His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers. His formal theory of creativity, curiosity and fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. He has been a member of the European Academy of Sciences and Arts since 2009, has published 333 peer-reviewed papers, and has earned seven best paper/best video awards, the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE Neural Networks Pioneer Award. He is president of NNAISENSE, which aims to build the first practical general-purpose AI.
8:15 pm - Pizza, Drinks and Networking