Paris NLP Season 4 Meetup #1

This event has passed

331 attended

Every 4th Wednesday of the month


Details

Seating is on a first come, first served basis, even if you are on the waiting list, so we suggest arriving early. Algolia can host up to 100 guests. Registration is required but does not guarantee entry.

----------
Speakers:
----------

• Florian Strub, Research Scientist @ DeepMind
Multimodal learning

Description:
While our representation of the world is shaped by our perceptions, our languages, and our interactions, these have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning has started to open up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy.
In this presentation, we will focus on visually grounded language learning for three reasons: (i) vision and language are both well-studied modalities across different scientific fields, (ii) it builds upon deep learning breakthroughs in natural language processing and computer vision, and (iii) the interplay between language and vision has been acknowledged in cognitive science.

This presentation will be divided into three parts:
As a first step, we will motivate our line of research by discussing the language grounding problem. (5-7 min)
Then, we will introduce some fundamental visual grounding tasks that have been explored over the past 3 years. (2-3 min)
Finally, we will focus on a specific kind of multimodal architecture, namely Modulation Layers (i.e., Conditional Batch Norm and FiLM); see the sketch after the materials below. (10-12 min)

Materials:
- http://papers.nips.cc/paper/7237-modulating-early-visual-processing-by-language
- https://distill.pub/2018/feature-wise-transformations/
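For readers unfamiliar with FiLM, here is a minimal sketch of feature-wise linear modulation as described in the materials above: a conditioning embedding (e.g., a question encoding) predicts per-channel scale and shift parameters that modulate visual feature maps. This is an illustrative PyTorch snippet, not code from the talk; the layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift image feature maps
    with parameters predicted from a conditioning (e.g., language) embedding."""

    def __init__(self, cond_dim, num_channels):
        super().__init__()
        # A single linear layer predicts per-channel gamma and beta from the condition.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, feature_maps, condition):
        # feature_maps: (batch, channels, height, width) CNN activations
        # condition:    (batch, cond_dim), e.g., an RNN encoding of a question
        gamma, beta = self.to_gamma_beta(condition).chunk(2, dim=-1)
        gamma = gamma[..., None, None]  # broadcast over spatial dimensions
        beta = beta[..., None, None]
        return gamma * feature_maps + beta


# Toy usage: modulate a 64-channel feature map with a 128-dim sentence embedding.
film = FiLM(cond_dim=128, num_channels=64)
out = film(torch.randn(2, 64, 7, 7), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 64, 7, 7])
```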

----------

• Felix Le Chevallier, Lead Data Scientist @ Lifen

Hacking Interoperability in Healthcare with AI: Structuring Medical Data to Digitize Medical Communications

How we scaled from zero to 100k daily predictions served to healthcare practitioners to help them communicate more efficiently: from simple heuristics with handcrafted rules and only a couple of clients, to classical machine learning, and finally to RNNs that structure information in free-form medical notes.
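As a rough illustration of the RNN-based step (not Lifen's actual pipeline; the vocabulary size, label set, and layer sizes below are made-up assumptions), a minimal sequence tagger might assign a label to every token of a free-form note:

```python
import torch
import torch.nn as nn

class NoteTagger(nn.Module):
    """Minimal BiLSTM sequence tagger: predicts one label per token
    (e.g., patient name, recipient, document type) in a free-form note."""

    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded tokens of the note
        states, _ = self.rnn(self.embed(token_ids))
        return self.classify(states)  # (batch, seq_len, num_labels) logits


# Toy usage with a made-up vocabulary of 5,000 tokens and 8 label types.
tagger = NoteTagger(vocab_size=5000, num_labels=8)
logits = tagger(torch.randint(0, 5000, (4, 120)))
print(logits.shape)  # torch.Size([4, 120, 8])
```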

----------

• Janna Lipenkova, Founder @ Anacode

Applications in data and text analytics often have an ontology as their conceptual backbone - that is, a hierarchical representation of the underlying knowledge domain. However, such representations are tedious to construct, maintain and customize manually. In this talk, I will show how text data and lexical relations such as hypernymy, synonymy and meronymy can be leveraged to automatically construct ontologies. After a review of different unsupervised and distantly supervised methods proposed for lexical relation extraction from text, I will explain Anacode's approach to building and maintaining large-scale, multilingual ontologies for the domain of business and market intelligence.
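As a concrete (if simplistic) illustration of unsupervised lexical relation extraction, here is a sketch of classic Hearst-style surface patterns for hypernymy. This is a generic textbook technique, not necessarily Anacode's approach, and the patterns and example sentence are assumptions for illustration only.

```python
import re

# Classic Hearst-style surface patterns: "Y such as X1, X2" implies Xi is-a Y.
HEARST = re.compile(r"(?P<hyper>\w[\w ]*?) (?:such as|including) (?P<hypos>\w[\w, ]*\w)")

def extract_hypernym_pairs(sentence):
    """Return (hyponym, hypernym) pairs found by simple surface patterns."""
    pairs = []
    for match in HEARST.finditer(sentence):
        hypernym = match.group("hyper").strip()
        for hyponym in re.split(r",\s*|\s+and\s+", match.group("hypos")):
            if hyponym:
                pairs.append((hyponym.strip(), hypernym))
    return pairs


print(extract_hypernym_pairs("emerging sectors such as automotive, retail and fintech"))
# [('automotive', 'emerging sectors'), ('retail', 'emerging sectors'), ('fintech', 'emerging sectors')]
```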