Paper Discussion: XLNet: Generalized Autoregressive Pretraining for Language Understanding

Canberra Deep Learning Meetup
Public group

C&MA House

22 Napier Cl · Deakin

How to find us

We are on the top level (Unit 3) conference room inside the Trellis Data (sponsor) office.


What we'll do

From the abstract:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
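
As a warm-up for the discussion, here is a tiny Python sketch (our own illustration, not code from the paper) of the permutation idea in point (1): for a sampled factorization order z, each position is predicted autoregressively from the positions that precede it in z, so across many permutations every token eventually conditions on both its left and right context. Note the actual model realizes this with two-stream attention over a shared sequence, rather than by literally re-ordering tokens.

import random

# Toy illustration of permutation language modeling.
tokens = ["New", "York", "is", "a", "city"]
T = len(tokens)

z = list(range(T))
random.shuffle(z)  # sample one factorization order z from Z_T

for t, pos in enumerate(z):
    # position z[t] is predicted from the tokens at positions z[:t]
    context_positions = sorted(z[:t])
    context = [tokens[i] for i in context_positions]
    print(f"predict x[{pos}]={tokens[pos]!r} given {context}")

Running this a few times shows how the conditioning sets change with the sampled order, which is how XLNet gets bidirectional context while keeping an autoregressive objective.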


If we have time, we will also compare XLNet and BERT with RoBERTa (sic).

Please read the paper beforehand, or at least the abstract.
Bring a laptop for reference.

This event is sponsored by Trellis Data.