Study Group: Single Headed Attention RNN

Details
Machine learning, including deep learning, is becoming increasingly relevant to everyone who works in technology. In these study group events we will collectively walk through machine learning research papers in order to better understand their theoretical motivations, practical contributions, and implementation details.
The first study group will cover a recent paper that approaches state-of-the-art language modeling results on a single GPU, unlike many other recent papers that train large transformer models with enormous amounts of computational power.
**Single Headed Attention RNN: Stop Thinking With Your Head**
Paper link: https://arxiv.org/abs/1911.11423
Everyone should take the time to read the paper in detail several days before the event, and (to the extent possible) read the key references.
The event will cost $5/person - cash only, charged at the event - to help cover the cost of the meeting room.
Organizational Notes:
- We'll use meetup RSVPs to sign you in when you arrive.
- Attendee selection process: Each meetup opens with an initial RSVP capacity. If a waiting list forms, we select an additional batch from it based on profile information to keep a good balance and diversity of attendees; this process may be adapted from meetup to meetup. Members whose profiles have names and introduction sections properly filled out will have priority.
- We overbook slightly. Seating capacity is about 10, plus a few extra chairs that can be brought in and placed in the corners. If having a seat is essential for you, arrive early; others may have to stand or form their own mini-group outside the conference room, as directed by the organizer.
Paper abstract
The leading approaches in language modeling are all obsessed with TV shows of my youth - namely Transformers and Sesame Street. Transformers this, Transformers that, and over here a bonfire worth of GPU-TPU-neuromorphic wafer scale silicon. We opt for the lazy path of old and proven techniques with a fancy crypto inspired acronym: the Single Headed Attention RNN (SHA-RNN). The author's lone goal is to show that the entire field might have evolved a different direction if we had instead been obsessed with a slightly different acronym and slightly different result. We take a previously strong language model based only on boring LSTMs and get it to within a stone's throw of a stone's throw of state-of-the-art byte level language model results on enwik8. This work has undergone no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author's small studio apartment far too warm in the midst of a San Franciscan summer. The final results are achievable in plus or minus 24 hours on a single GPU as the author is impatient. The attention mechanism is also readily extended to large contexts with minimal computation. Take that Sesame Street.
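To anchor the discussion, below is a minimal sketch of the core idea named in the abstract: an LSTM layer followed by a single attention head with a causal mask and a residual connection. This is not the author's implementation; the module name, dimensions, and layer choices are illustrative assumptions, written in PyTorch.

```python
# Illustrative sketch only -- not the paper's code. One LSTM layer plus a
# single (causally masked) attention head, i.e. the "single headed attention
# RNN" idea at its simplest.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttentionRNNBlock(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
        # One set of query/key/value projections: a single attention head.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.norm = nn.LayerNorm(d_model)
        self.scale = d_model ** -0.5

    def forward(self, x):
        h, _ = self.rnn(x)                       # (batch, seq, d_model)
        q, k, v = self.q(h), self.k(h), self.v(h)
        seq_len = x.size(1)
        # Causal mask: each position attends only to itself and earlier positions.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                     device=x.device), diagonal=1)
        scores = (q @ k.transpose(-2, -1)) * self.scale
        scores = scores.masked_fill(mask, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        return self.norm(h + attn @ v)           # residual + layer norm

# Example usage on a toy batch.
block = SingleHeadAttentionRNNBlock(d_model=64)
out = block(torch.randn(2, 16, 64))              # -> shape (2, 16, 64)
```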
Looking forward to discussing this paper with you in our study group!
