Arun Raja on BERT: Pre-training of Deep Bidirectional Transformers for...

Details

**NEW LOCATION! Meet us at H Bar on Queen St. West**

Arun Raja will be presenting “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin et al.

Bidirectional Encoder Representations from Transformers, or BERT, is a language representation model created by researchers at Google AI. The X-factor of BERT is its bidirectional pre-training: by conditioning on context from both the left and the right of each token, it achieves state-of-the-art results across a range of language understanding tasks.
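BERT learns this bidirectional conditioning through its masked-language-model objective: some input tokens are hidden, and the model must predict them from the surrounding context on both sides. A toy sketch of that input corruption step (simplified: the paper actually replaces 80% of chosen tokens with [MASK], 10% with a random token, and leaves 10% unchanged):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    """Simplified sketch of BERT's masked-LM corruption:
    hide roughly `mask_rate` of the tokens behind [MASK].
    The model's training task is to recover the hidden
    tokens using both left and right context."""
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets.append((i, tok))  # position and original token to predict
        else:
            masked.append(tok)
    return masked, targets

masked, targets = mask_tokens("the man went to the store".split(), mask_rate=0.5, seed=1)
```

Because the prediction targets can sit anywhere in the sentence, a left-to-right-only model cannot solve this task; the encoder must attend in both directions.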

Paper: https://arxiv.org/pdf/1810.04805.pdf

Arun Raja is a National Science Scholar from Singapore and an undergraduate Computer Science student at the University of Edinburgh, currently on a third-year exchange at the University of Toronto. He is passionate about research in Natural Language Processing.