Over the last few years, largely driven by Deep Learning, we have become more adept at training models that map from inputs to outputs with high precision. These models break down, however, when the task or the data changes even slightly, and we are still at the beginning of learning how to transfer acquired knowledge. In this talk, I will give an overview of transfer learning and look into its applications and promise for NLP. Among other things, I will touch on multi-task learning and on selecting relevant data.
Bio: Sebastian is a second-year PhD student in Natural Language Processing and Deep Learning at the Insight Centre for Data Analytics in Dublin, Ireland, and a research scientist at AYLIEN, a Dublin-based text analytics startup. He previously studied Computational Linguistics at the University of Heidelberg, Germany, and at Trinity College Dublin. During his studies, he worked with Microsoft, IBM's Extreme Blue, Google Summer of Code, and SAP, among others.