BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding


Details
Our next paper is "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Devlin, Chang, Lee, and Toutanova.
From the abstract: "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications."
Link to the paper: https://arxiv.org/abs/1810.04805
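To get a feel for the "one additional output layer" idea before the discussion, here is a minimal fine-tuning sketch. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint, which are not part of the paper itself (the authors released TensorFlow code); the example sentence and label are purely illustrative.

```python
# Illustrative sketch: fine-tuning pre-trained BERT for sentence classification
# by adding a single output layer on top of the encoder (Hugging Face
# `transformers` API, assumed here; not the paper's original TensorFlow code).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification attaches one linear classification layer
# to the pre-trained bidirectional encoder's [CLS] representation.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer(
    "BERT conditions on both left and right context in all layers.",
    return_tensors="pt",
)
labels = torch.tensor([1])  # hypothetical label for this example sentence

# One fine-tuning step: the encoder weights and the new output layer
# are updated jointly, as described in the paper's fine-tuning setup.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
print("loss:", float(outputs.loss))
```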
Afterwards we'll head over to the Pocket Pub or Cliff's for a drink and a bite.