The Structure of Recurrent Neural Networks in Natural Language Processing


Details
Welcome to the DC/NoVA Papers We Love meetup!
Papers We Love is an international organization centered around the appreciation of computer science research papers. There's so much we can learn from the landmark research that shaped the field and the current studies that are shaping our future. Our goal is to create a community of tech professionals passionate about learning and sharing knowledge. Come join us!
New to research papers? Watch The Refreshingly Rewarding Realm of Research Papers (https://www.youtube.com/watch?v=8eRx5Wo3xYA) by Sean Cribbs.
Ideas and suggestions are welcome–fill out our interest survey here (https://docs.google.com/forms/d/e/1FAIpQLSeJwLQhnmzWcuyodPrSmqHgqrvNxRbnNSbiWAuwzHwshhy_Sg/viewform) and let us know what motivates you!
// Tentative Schedule
• 7:00-7:30–Informal paper discussion
• 7:30-7:35–Introduction and announcements
• 7:35-8:40–The emergent algebraic structure of RNNs and embeddings in NLP (https://arxiv.org/abs/1803.02839), written and presented by Sean A. Cantrell, PhD
• 8:40-9:00–Informal paper discussion
// Directions
CustomInk Cafe (3rd Floor)
Mosaic District, 2910 District Ave #300
Fairfax, VA 22031
When you get here, you can come in via the patio. Don't be scared by the metal gate and sign. It's accessible via the outside stairs near True Food. There is a parking garage next door for those coming by vehicle, and there is a walkway to the patio on the 3rd floor of the garage nearest MOM's Organic Market. If you'd prefer to take the elevator rather than the stairs, the elevator in the parking garage is the easiest to use.
Metro: The Dunn Loring metro station is about 0.7 miles from our meetup location. It's very walkable, but if you'd prefer a bus, the 402 Southbound and 1A/1B/1C Westbound leave from Dunn Loring Station about every 5-10 minutes (see a schedule for a more detailed timetable).
If you're late, we totally understand–please still come! (Entering via the patio is best.) Just be sure to slip in quietly if a speaker is presenting.
// Papers
Sean will be presenting his own paper. About it, he says:
My paper is mostly part of the effort to demystify the neural network black box. I used a basic GRU + fully connected layer to classify Tweets by their accounts of origin. It turns out that word embeddings trained end-to-end in this framework parameterize a Lie group, and RNNs form a nonlinear representation of the group. The upshot: RNNs in NLP have a structure we have understood for over a century. I offer an interpretation of the action of words on the RNN's internal state, and propose a new word embedding scheme.
The paper is available at https://arxiv.org/abs/1803.02839, and Sean has a reduced and simplified version that I will link to in the comments.
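If you'd like a concrete picture of the model before the talk, here is a minimal sketch of the architecture Sean describes: a word embedding layer trained end-to-end, a GRU, and a fully connected classification layer. This is an illustrative PyTorch version for orientation only; the class name, hyperparameters, and vocabulary size are placeholder assumptions, not the settings used in the paper.

import torch
import torch.nn as nn

class TweetClassifier(nn.Module):
    # Placeholder hyperparameters; the paper's actual settings may differ.
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=256, num_accounts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # word embeddings trained end-to-end
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_accounts)     # fully connected classification layer

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embed(token_ids)     # (batch, seq_len, embed_dim)
        _, h = self.gru(x)            # h: (1, batch, hidden_dim), final hidden state
        return self.fc(h.squeeze(0))  # (batch, num_accounts) logits, one per account of origin

# Example: classify a batch of two 12-token tweets (random indices for illustration)
model = TweetClassifier()
logits = model(torch.randint(0, 20000, (2, 12)))

The paper's analysis concerns what such end-to-end training does to the embedding and hidden-state spaces; the sketch above is just the classifier setup, not the algebraic analysis itself.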
