Neural Networks for Natural Language Processing


Details
IMPORTANT: Location update: same old, same old :-)
We are really honored to have Tomáš Mikolov as our next speaker. Tomáš has done groundbreaking work in NLP at Google Brain and now at Facebook AI Research.
Neural Networks for Natural Language Processing
Abstract: Artificial neural networks are currently very successful in various machine learning tasks that involve natural language. In this talk, I will describe recurrent neural network language models, as well as their most common applications to speech recognition and machine translation. I will also talk about distributed word representations, their interesting properties, and efficient ways to compute them and use them in tasks such as text classification. Finally, I will describe our latest efforts to create a novel dataset that could be used to develop machines that can truly communicate with human users in natural language.
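For readers curious about the "interesting properties" of distributed word representations the abstract alludes to, here is a minimal sketch (not part of the talk materials) of the well-known word-analogy behavior, using the open-source gensim library, which implements the word2vec algorithms. The toy corpus and all hyperparameters are illustrative assumptions; the analogy property only emerges reliably when training on large corpora.

    # Minimal word2vec sketch using gensim (assumed gensim 4.x API).
    # Toy corpus and hyperparameters are illustrative, not from the talk.
    from gensim.models import Word2Vec

    # Tiny illustrative corpus; real experiments use billions of tokens.
    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["the", "man", "walks", "in", "the", "city"],
        ["the", "woman", "walks", "in", "the", "city"],
    ]

    # sg=1 selects the skip-gram architecture from the word2vec papers.
    model = Word2Vec(sentences, vector_size=50, window=2,
                     min_count=1, sg=1, epochs=200)

    # The famous analogy: vector("king") - vector("man") + vector("woman")
    # should land near vector("queen"). On this toy corpus the result is
    # noisy; the property becomes robust at scale.
    print(model.wv.most_similar(positive=["king", "woman"],
                                negative=["man"], topn=1))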
Bio: Tomáš Mikolov has been a research scientist in the Facebook AI Research group since 2014. Previously, he was a member of the Google Brain team, where he developed and implemented efficient algorithms for computing distributed representations of words (the word2vec project). He obtained his PhD from the Brno University of Technology (Czech Republic) in 2012 for his work on recurrent neural network-based language models (RNNLM). His long-term research goal is to develop intelligent machines that people can communicate with and use to accomplish complex tasks.
