Deep holistic image understanding and Understanding Recurrent Nets

Proud to have Bernardino Romera from the Torr Vision Group at the University of Oxford join us and talk about his work on CRFs and RNNs. Many thanks to G-Research for sponsoring this event.

Update: We found out that Stanford PhD student Andrej Karpathy was in town for an internship at DeepMind and he has kindly agreed to deliver a talk as well.

Note: we need to provide a list of attendee names to the venue beforehand for security purposes so please keep your RSVP and user name up to date.

The agenda is:

18:00 - Arrive for drinks & networking

18:30 - Talk: Bernardino

19:30 - Break

20:00 - Talk: Andrej

21:00 - Close

See you there!

The London ML Team.

• Deep holistic image understanding

Image understanding involves not only object recognition, but also object delineation. This shape recovery task is challenging for two reasons: first, the necessity of learning a good representation of the visual inputs; second, the need to account for contextual information across the image, such as edges and appearance consistency. Deep convolutional neural networks are successful at the former, but have limited capacity to delineate visual objects. I will present a framework that extends the capabilities of deep learning techniques to tackle this scenario, obtaining cutting-edge results in semantic segmentation (i.e. detecting and delineating objects) and depth estimation. A live demo of this work can be found at
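The combination the abstract describes, a CNN providing per-pixel class scores that are then refined to respect contextual consistency, can be sketched as a toy mean-field update. This is an illustrative simplification, not the speaker's implementation: the function names are invented, and a simple 4-neighbourhood average stands in for the bilateral filtering used in full CRF-based models.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_refine(unary, n_iters=5, pairwise_weight=1.0):
    """Toy mean-field refinement of CNN per-pixel class scores.

    unary: (C, H, W) array of class scores from a CNN.
    Each iteration smooths the current label distribution over the
    4-neighbourhood of each pixel (a crude stand-in for the pairwise
    terms of a dense CRF) and folds the result back into the unaries.
    """
    q = softmax(unary)  # initial per-pixel label distribution
    for _ in range(n_iters):
        # Message passing: sum each pixel's neighbours, per class.
        msg = np.zeros_like(q)
        msg[:, 1:, :] += q[:, :-1, :]
        msg[:, :-1, :] += q[:, 1:, :]
        msg[:, :, 1:] += q[:, :, :-1]
        msg[:, :, :-1] += q[:, :, 1:]
        q = softmax(unary + pairwise_weight * msg)
    return q
```

With a weakly mislabelled pixel surrounded by confident neighbours, the refinement pulls that pixel toward the label of its context, which is exactly the "appearance consistency" effect the CNN alone struggles to provide.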

Bio: Bernardino is a postdoc in the Torr Vision Group at the University of Oxford. He received his PhD from University College London in 2014, supervised by Prof. Massimiliano Pontil and Prof. Nadia Berthouze. He has published in top-tier machine learning conferences such as NIPS, ICML and AISTATS, receiving several awards, including the Best Paper Runner-up Prize at ICML 2013 and the Best Paper Award at ACII 2013. During his PhD he interned at Microsoft Research, Redmond.

His research focuses on multitask and transfer learning methods applied to computer vision tasks such as object recognition and segmentation, and emotion recognition.

• Visualizing and Understanding Recurrent Networks - Andrej Karpathy

Recurrent Neural Networks are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. In this talk I will summarize my own experience with training these models for automated image captioning and for generating text character by character, with a particular focus on understanding the source of their impressive performance and their limitations.
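The character-by-character generation the talk covers boils down to a simple loop: feed the current character in, update the hidden state, sample the next character from the output distribution, repeat. A minimal sketch with a vanilla RNN follows; the randomly initialised weights stand in for a trained model, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

chars = sorted(set("hello world"))  # toy character set
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for c, i in char_to_ix.items()}

V, H = len(chars), 16              # vocab size, hidden size
Wxh = rng.normal(0, 0.1, (H, V))   # input  -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (the recurrence)
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output logits
bh, by = np.zeros(H), np.zeros(V)

def sample(seed_char, n_chars):
    """Generate text one character at a time from the RNN."""
    h = np.zeros(H)
    x = np.zeros(V); x[char_to_ix[seed_char]] = 1
    out = [seed_char]
    for _ in range(n_chars):
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent state update
        logits = Why @ h + by
        p = np.exp(logits - logits.max()); p /= p.sum()
        ix = rng.choice(V, p=p)               # sample the next character
        x = np.zeros(V); x[ix] = 1
        out.append(ix_to_char[ix])
    return "".join(out)
```

With untrained weights the output is gibberish; training adjusts Wxh, Whh and Why so the sampled distribution matches the statistics of the training text, which is where the "impressive performance" under discussion comes from.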

Bio: Andrej Karpathy is a 5th year PhD student at Stanford University, working with Fei-Fei Li. His focus is on Deep Learning, with applications in Computer Vision, Natural Language Processing, and their intersection. He is visiting London during the summer as an intern at DeepMind.


At G-Research, we research investment ideas to predict returns in financial markets across multiple asset classes. We also develop the research and execution platform to deploy these ideas in markets globally.

The task of our Machine Learning group is to use state-of-the-art machine learning techniques to generate financial forecasts from large, noisy and rapidly changing data sets. Making forecasts is core to our business, and the Machine Learning team have a measurable impact on it. Besides the extensive data they already have in-house, they have the resources to purchase and clean new datasets if they have ideas which require something different; they are limited only by their imagination.

Once a researcher gains sufficient experience they will have the freedom to set their own research agenda and explore any ideas they believe will give the best results. We operate in a highly competitive field and in order to stay ahead we need to be using the latest machine learning research and developing novel ideas.

We are currently recruiting for our Machine Learning Team - if you are interested in learning more then please email Dani McColl – [masked]