On Learning Form and Meaning in Neural Machine Translation Models


Details
Come join us for our Boston NLP Meetup!
Our special guest is Yonatan Belinkov (http://people.csail.mit.edu/belinkov/) from the MIT Computer Science and Artificial Intelligence Laboratory.
*Pizza and Beer will be provided :)
On Learning Form and Meaning in Neural Machine Translation Models
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, interpreting such models remains challenging, and little is known about what they learn about source and target languages during the training process. In this talk, I will present our recent work on investigating what neural MT models learn about linguistic properties. We analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of those representations through extrinsic classification tasks. First, we use part-of-speech and morphological tagging tasks to evaluate morphological learning in neural MT models. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Second, we analyze what kind of semantic information is captured in neural MT models through a semantic tagging task. Our data-driven, quantitative evaluation sheds light on important aspects of the neural MT system and its ability to capture the form and meaning of words.
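
For those curious about the "extrinsic classification" methodology mentioned in the abstract, the sketch below illustrates the general idea of a probing classifier: token representations extracted from a frozen NMT encoder are fed to a small classifier trained to predict part-of-speech tags, and the classifier's accuracy is read as a rough measure of how much morphological information the representations encode. The dimensions, classifier architecture, and random placeholder data are illustrative assumptions, not the actual setup used in the work presented in the talk.

import torch
import torch.nn as nn

HIDDEN_DIM = 512      # assumed NMT encoder hidden size (placeholder)
NUM_POS_TAGS = 17     # e.g. the Universal POS tag set

# The probe: a small classifier trained on top of frozen encoder representations,
# so its accuracy reflects what the encoder itself has learned about morphology.
probe = nn.Sequential(
    nn.Linear(HIDDEN_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_POS_TAGS),
)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(token_reprs, pos_labels):
    """One training step on a batch of pre-extracted token representations."""
    optimizer.zero_grad()
    logits = probe(token_reprs)          # (batch, NUM_POS_TAGS)
    loss = loss_fn(logits, pos_labels)   # pos_labels: (batch,) tag indices
    loss.backward()                      # only the probe's weights are updated
    optimizer.step()
    return loss.item()

# Toy batch: 32 token vectors standing in for states from some encoder layer,
# plus gold POS tags; in practice these would come from tagged bitext.
reprs = torch.randn(32, HIDDEN_DIM)
tags = torch.randint(0, NUM_POS_TAGS, (32,))
print(train_step(reprs, tags))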
Bio
Yonatan Belinkov is a PhD candidate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), working on speech and language processing. His recent research interests include vector representations of language in neural network models. His research has been published at ACL, EMNLP, TACL, and ICLR. He received an SM degree from MIT in 2014 and prior to that a BSc in Mathematics and an MA in Arabic Studies, both from Tel Aviv University.
