9th Belgium NLP Meetup


Details
We know we've put your patience to the test, but we're finally ready for the first Belgium NLP Meetup of 2019. We've been invited by Foodpairing in Ghent, a young and promising company that will demonstrate what a perfect pair food and NLP can make. Mark Wednesday, March 13th in your calendars if that sounds as good to you as it does to us. Doors open at 7pm, talks start around 7.30pm.
Here's our mouthwatering list of speakers:
Thomas Dehaene (Foodpairing NV): This is delicious; where's it from?
Foodpairing has various cool NLP use cases, one of which is the ability to classify a recipe into one or more regionalities. Thomas will give an overview of how the data science team processed and clustered their dataset, how the model gets trained using Sklearn or Keras, and how the whole shebang gets deployed using Google Cloud Platform.
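To give a concrete flavour of what such a recipe-to-region classifier could look like, here is a minimal, purely illustrative sketch using scikit-learn's TF-IDF features and logistic regression. The recipes, labels, and model choice below are invented for illustration; the actual Foodpairing dataset, models, and Google Cloud deployment are what Thomas will cover in the talk.

```python
# Minimal sketch of a recipe-regionality classifier (illustrative only):
# TF-IDF features over ingredient text plus a linear scikit-learn model.
# All data and labels here are invented, not Foodpairing's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

recipes = [
    "soy sauce ginger sesame oil scallion rice",
    "tortilla black beans cilantro lime jalapeno",
    "basil tomato mozzarella olive oil pasta",
    "miso dashi tofu seaweed rice",
]
regions = ["asian", "latin_american", "italian", "asian"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(recipes, regions)

print(model.predict(["parmesan tomato basil gnocchi"]))  # e.g. ['italian']
```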
-----
Lieve Macken (LT3, UGent): Translating in the digital age: challenges and opportunities
Neural Machine Translation (NMT) has become the mainstream approach in MT technology, mainly due to its ability to produce far better translations than its predecessor, statistical machine translation (SMT). Numerous studies covering many language pairs and translation tasks have demonstrated that NMT outperforms SMT. Since NMT systems are able to take the context of the entire sentence into account during translation, they can produce more fluent translations. On the other hand, NMT output contains errors that are less transparent, such as omissions, which can be quite challenging for post-editors.
In this talk, Lieve will present recent work carried out in collaboration with Joke Daems, comparing the interventions translators made on human-translated and neural machine-translated texts when they assumed they were revising a human translation versus post-editing a machine translation. In reality, they could be 'post-editing' a human-translated text or 'revising' a machine-translated one.
-----
Thomas Demeester (UGent, imec): Help, my neural NLP model is too big!
Models consisting entirely of deep neural networks, trained on large datasets, hold the state of the art in many NLP tasks these days. Such models can become very large in terms of the number of parameters, especially when large vocabularies are involved. Yet smaller models may be more attractive, for example for deployment on hand-held devices, or simply because of the limited computational resources of small-scale businesses.
Thomas will introduce the concept of 'predefined sparseness': how neural sequence models can be made sparse even before training, leading to models with similar expressiveness that are potentially much smaller than standard dense models.
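As a rough illustration of the general idea (a toy sketch, not the specific scheme Thomas will present), the Keras snippet below fixes a random binary mask on a layer's weight matrix before training, so only a predefined fraction of the connections ever carries non-zero weights. The layer, mask density, and loss are arbitrary choices for demonstration.

```python
# Toy illustration of 'predefined sparseness': fix a binary mask on a layer's
# weight matrix before training and keep it fixed, so only the unmasked entries
# are ever non-zero. Generic sketch, not the specific scheme from the talk.
import numpy as np
import tensorflow as tf

class FixedMask(tf.keras.constraints.Constraint):
    """Zeroes out weights outside a predefined sparsity pattern after every update."""
    def __init__(self, mask):
        self.mask = tf.constant(mask, dtype=tf.float32)

    def __call__(self, w):
        return w * self.mask

rng = np.random.default_rng(0)
in_dim, out_dim, density = 128, 64, 0.25          # keep ~25% of the connections
mask = (rng.random((in_dim, out_dim)) < density).astype("float32")

layer = tf.keras.layers.Dense(out_dim, kernel_constraint=FixedMask(mask))
model = tf.keras.Sequential([tf.keras.Input(shape=(in_dim,)), layer])
model.compile(optimizer="adam", loss="mse")

# Enforce the pattern on the initial weights too; the constraint then keeps
# masked-out weights at exactly zero after every training step.
w, b = layer.get_weights()
layer.set_weights([w * mask, b])
```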
