XLNet is a new NLP pretraining method from Google Brain, released on June 19, 2019. XLNet outperforms BERT on 20 tasks and achieves state-of-the-art results on 18 of them, including question answering, natural language inference, sentiment analysis, and document ranking. On SQuAD 2.0, for example, XLNet reached 86% exact match, compared with BERT's 79%. We will review XLNet and how it differs from BERT.
Papers to read:
Yang et al. "XLNet: Generalized Autoregressive Pretraining for Language Understanding." arXiv:[masked] (June 2019). https://arxiv.org/abs/1906.08237
Dai et al. "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context." ACL 2019. https://arxiv.org/pdf/1901.02860
Github (code + pretrained models): https://github.com/zihangdai/xlnet
You can attend in person or remotely. If you attend remotely, register here: https://zoom.us/webinar/register/WN_j29BAwTlTOmGl8ReJy2dmA
You will receive a link to join the live presentation.
This is part of a weekly reading series. We come together to discuss cutting-edge AI topics and papers. One paper is selected each week as the main discussion topic. The meeting mixes presentation and whiteboarding with group discussion and participation. Bring your questions and get them answered, and socialize with other like-minded people.
6:30-7pm Meet and greet
7-8pm Group discussion
8-8:30pm Additional social