
Why You Should Care About Byte-Level Seq2Seq Models in NLP (Tom Kenter, Google)

Hosted by Marzieh S. and 4 others


Title: Why You Should Care About Byte-Level Sequence-to-Sequence Models in NLP

Abstract: Sequence-to-sequence models are ubiquitous in NLP nowadays. Usually, the text they deal with is treated as a sequence of words. While this proves to be a powerful way of dealing with textual data, word-level models do come with a set of problems. Byte-level seq2seq models are an attractive alternative as they provide elegant solutions to some of the problems word-level models have.
To make things concrete, we will focus throughout this talk on machine reading: the task in which a computer reads a document and has to answer questions about it.
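As a rough illustration (not taken from the talk itself), one of the most commonly cited problems of word-level models is the fixed word vocabulary: any word not seen during training collapses to an unknown token. Byte-level models avoid this by construction, since every string maps onto the same 256 byte values. The toy sketch below, with a hypothetical word vocabulary, shows the contrast; the usual trade-off is that byte-level sequences are several times longer than their word-level counterparts.

# Minimal sketch (illustrative, not from the talk): byte-level inputs
# sidestep the out-of-vocabulary problem of word-level vocabularies.

word_vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}  # toy word vocabulary

def word_ids(text):
    # Unseen words collapse to a single <unk> id, losing information.
    return [word_vocab.get(w, word_vocab["<unk>"]) for w in text.lower().split()]

def byte_ids(text):
    # Every string maps onto a fixed vocabulary of 256 byte values,
    # so rare words, typos, and new scripts never fall out of vocabulary.
    return list(text.encode("utf-8"))

print(word_ids("The cat miaowed"))  # [0, 1, 3] -- 'miaowed' becomes <unk>
print(byte_ids("The cat miaowed"))  # [84, 104, 101, 32, ...] -- fully recoverable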

Bio: Tom Kenter (Google UK, London) conducts research on text-to-speech and natural language understanding. He received his PhD in 2017 from the Information and Language Processing Systems group at the University of Amsterdam, supervised by Prof. Dr. Maarten de Rijke. He has published at AAAI, ACL, CIKM, and SIGIR.
The Alan Turing Institute
96 Euston Road · London
How to find us

Enter the British Library through the main entrance and follow signs to the Alan Turing Institute reception area.
