BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Hosted By
Dave P. and 2 others

Details

Our next paper is "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Devlin, Chang, Lee, and Toutanova.

From the paper's abstract: "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications."

Link to the paper: https://arxiv.org/abs/1810.04805
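
If you'd like to poke at the model before the meetup, here's a rough sketch of what "fine-tuned with just one additional output layer" looks like in practice. This is not code from the paper; it assumes the Hugging Face transformers library, the bert-base-uncased checkpoint, and a made-up two-class task, purely for illustration.

import torch
from transformers import BertTokenizer, BertModel

# Load the pre-trained bidirectional encoder and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

# The "one additional output layer": a linear classifier over the
# pooled [CLS] representation (2 classes here, just for illustration).
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)

inputs = tokenizer("Papers We Love is reading BERT.", return_tensors="pt")
outputs = encoder(**inputs)
logits = classifier(outputs.pooler_output)  # shape: (1, 2)

# Fine-tuning would then update both the encoder and the classifier
# with an ordinary cross-entropy loss on labeled task data.
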

Afterwards we'll head over to the Pocket Pub or Cliff's for a drink and a bite.

Papers We Love PDX
2540 NE Martin Luther King Jr Blvd · Portland, OR