Deep Learning Meetup #15 at Palais des Congrès

This is a past event

253 people went

Le Palais des Congrès de Paris

2 place Porte Maillot · 75017 Paris

How to find us

Salle Passy, level 1 of the Palais



Dear Deeplearners,

Our 15th Deep Learning Meetup, organised by Heuritech in collaboration with Microsoft, will take place on November 6. It will be held at the Palais des Congrès, after day 1 of Microsoft Experiences.

We'll have three great speakers: Andrew Fitzgibbon (Microsoft), Léonard Blier (FAIR / TAU INRIA), and Pierre Stefani (Photobox), followed by informal discussion, food, and drinks.

** VERY IMPORTANT ** For security reasons at the Palais des Congrès, admission is strictly limited to people registered for the Microsoft Experiences event. Registration is completely free, and you may register there just for the meetup: . Moreover, the gates will close strictly at 18:30, so please arrive before then. If you realise you won't be able to join, please RSVP "no" on Meetup.


Andrew Fitzgibbon (Microsoft)
I have been lucky enough to have been involved in the development of real-world computer vision systems for over twenty years. In 1999, prize-winning research from Oxford University was spun out to become the Emmy-award-winning camera tracker “boujou”, which has been used to insert computer graphics into live-action footage in pretty much every movie made since its release, from the “Harry Potter” series to “Bridget Jones’s Diary”. In 2007, I was part of the team that delivered human body tracking in Kinect for Xbox 360, and in 2015 I moved from Microsoft Research to the Windows division to work on Microsoft’s HoloLens, an AR headset brimming with cutting-edge computer vision technology. In all of these projects, the academic state of the art has had to be leapfrogged in accuracy and efficiency, sometimes by several orders of magnitude. Sometimes that’s just raw engineering; sometimes it means completely new ways of looking at the research. If I had to nominate one key to success, it’s a focus on, well, everything: from low-level coding to algorithms to user interface design, and on always being willing to change one’s mind.

Andrew is a scientist with HoloLens at Microsoft, Cambridge. He is best known for his work on 3D vision, computer vision, graphics, machine learning, and a little neuroscience. He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award.


Léonard Blier (FAIR Paris / TAU INRIA) - Do Deep Learning models have too many parameters?

"The best model is the simplest model which can explain the data", says Occam's razor principle, which describes how we are doing inductive reasoning and generalization, both in empirical science and in our everyday life. This general principle can be formalized with tools from information theory and compression: the best model is the model which can compress (losslessly) the data the most when taking into account the cost of encoding the model itself.

When it comes to Deep Learning, there seems to be a paradox: in practice, the best models are often huge, with very many parameters, and thus extremely expensive to encode.

We will introduce the information theory viewpoint in machine learning and deep learning, and show that despite their huge number of parameters, deep learning models are not "complex".


Pierre Stefani (Photobox) - Landmark recognition in photos
Where did I take this photo? What was the name of that castle again? Recognizing landmarks and tourist sites in pictures is not a straightforward task: we'll present the main challenges we faced in tackling this problem and the available resources, including the recent Google Landmarks Dataset and Deep Attentive Local Features. Since off-the-shelf solutions trained on public data do not perform as well for Photobox use cases, we will present our adapted solution, which combines fine-tuned CNNs trained on our own datasets, an attention model, and locally aggregated vectors (VLAD).
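VLAD itself is a published aggregation scheme (Jégou et al.): each local descriptor is assigned to its nearest cluster centre, the residuals are summed per cluster, and the result is flattened and normalised into one global image vector. A minimal sketch of that idea (the generic algorithm, not Photobox's actual pipeline; `descriptors` and `centroids` are placeholder inputs):

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD aggregation: sum of residuals from each local descriptor
    to its nearest cluster centre, flattened and normalised.

    descriptors: (n, d) array of local features for one image
    centroids:   (k, d) array of cluster centres (e.g. from k-means)
    returns:     (k * d,) global descriptor for the image
    """
    k, d = centroids.shape
    # Hard-assign each descriptor to its nearest centroid
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)

    # Accumulate residuals per cluster
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centroids[c]
    v = v.flatten()

    # Power normalisation followed by L2, as is common in the VLAD literature
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

In a full pipeline, the local descriptors would come from a (possibly attention-weighted) CNN rather than hand-crafted features.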

Looking forward to seeing you all!