The Deep Learning meetup is coming back on 30/01. For security reasons you will need to subscribe to the Eventbrite event and bring your ID, as it will be checked beforehand: https://www.eventbrite.co.uk/e/deep-learning-paris-meetup-16-at-samsung-le-centorial-tickets-55025027338
Only subscriptions on Eventbrite will be taken into account.
We'll have 3 great speakers:
Thomas Wolf, Chief Science Officer R&D at Hugging Face
Transfer Learning for Natural Language Generation – The Case of Open-Domain Dialog
Free-form dialogue systems ("chatbots") are agents designed to interact with humans in open conversations. Developing these systems tackles the general research question of how a model can generate a coherent text output from a textual input, in particular over a wide range of topics and in a stochastic environment. These dialog agents are thus test-beds for many interactive AI systems, but as of today, building such intelligent conversational agents remains an unsolved problem. In this talk, I will present a comparison and the technical details behind the two winning approaches of the Conversational Intelligence Challenge 2 held at NeurIPS 2018 last month. The first (our) approach won the automatic evaluation track while the second won the human evaluation track. These two approaches bear interesting similarities, being both based on a transfer learning scheme and, more precisely, on the very same pre-trained model, but they also showcase interesting and complementary differences in the implementation of the adaptation phase, with differing fine-tuning datasets, multi-task objectives and architectural adaptations.
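The transfer-learning scheme the abstract mentions can be illustrated with a deliberately tiny toy: "pre-train" a bigram language model on general text, then adapt its statistics on a small dialog corpus. Everything here (the corpora, the weighting, the function names) is made up for illustration; the actual winning systems fine-tuned a large pre-trained Transformer, not a count-based model.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # "Pre-training": collect bigram counts from a general-domain corpus.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def fine_tune(counts, dialog_corpus, weight=3):
    # Adaptation phase: dialog counts are over-weighted so the model
    # shifts toward the conversational domain without discarding the
    # general-domain statistics it started from.
    for sentence in dialog_corpus:
        tokens = sentence.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += weight
    return counts

def next_word(counts, word):
    # Greedy next-word prediction from the adapted counts.
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

general = ["the weather is nice today", "the stock market is volatile"]
dialog = ["hi how are you", "how is your day going"]

model = fine_tune(train_bigrams(general), dialog)
```

After adaptation, `next_word(model, "is")` prefers the conversational continuation "your" over the general-domain ones, which is the whole point of the fine-tuning step: same model, shifted domain.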
Mathieu Poumeyrol, Senior R&D at Snips
Tract: running deep models on the edge
Tract is Snips' open-source neural network inference library. It can run the "Hey Snips" wake-word model or Google's Inception on a $5 chip.
Alexandre Ramé, Lead R&D at Heuritech
OMNIA Faster RCNN (https://arxiv.org/abs/1812.02611)
Object detectors tend to perform poorly in new or open domains, and require exhaustive yet costly annotations from fully labeled datasets. We aim to benefit from several datasets with different categories, without additional labelling, not only to increase the number of categories detected but also to take advantage of transfer learning and to enhance domain independence.
More details about the talks later. Looking forward to seeing you all there!