Rotational Invariance in Convolutional Neural Networks

Every 2 months on the 4th Wednesday

Details

Welcome to the DC/NoVA Papers We Love meetup!

Papers We Love is an international organization centered around the appreciation of computer science research papers. There's so much we can learn from the landmark research that shaped the field and the current studies that are shaping our future. Our goal is to create a community of tech professionals passionate about learning and sharing knowledge. Come join us!

New to research papers? Watch The Refreshingly Rewarding Realm of Research Papers (https://www.youtube.com/watch?v=8eRx5Wo3xYA) by Sean Cribbs.

Ideas and suggestions are welcome–fill out our interest survey here (https://docs.google.com/forms/d/e/1FAIpQLSeJwLQhnmzWcuyodPrSmqHgqrvNxRbnNSbiWAuwzHwshhy_Sg/viewform) and let us know what motivates you!

// Tentative Schedule

• 7:00-7:15–Informal paper discussion

• 7:15-7:25–Introduction and announcements

• 7:25-8:40–Exploiting Cyclic Symmetry in Convolutional Neural Networks (https://arxiv.org/pdf/1602.02660.pdf), presentation and discussion led by Philip Leclerc

• 8:40-9:00–Informal paper discussion

// Directions

CustomInk Cafe (3rd Floor)
Mosaic District, 2910 District Ave #300
Fairfax, VA 22031

When you get here, you can come in via the patio. Don't be scared by the metal gate and sign; the patio is accessible via the outside stairs near True Food. For those coming by vehicle, there is a parking garage next door, with a walkway to the patio from the 3rd floor of the garage nearest MOM's Organic Market.

Metro: The Dunn Loring metro station is about 0.7 miles from our meetup location. It's very walkable, but if you'd prefer a bus, the 402 Southbound and the 1A/1B/1C Westbound leave from Dunn Loring station about every 5-10 minutes (see the schedules for detailed timetables).

If you're late, we totally understand–please still come! (Entering via the patio is best.) Just be sure to slip in quietly if a speaker is presenting.

// Papers

We'll discuss the 2016 paper Exploiting Cyclic Symmetry in Convolutional Neural Networks (https://arxiv.org/pdf/1602.02660.pdf), which studies the effect on test-set accuracy of encoding rotational invariance directly into the architecture of a convolutional neural network, much as translation invariance is conventionally built into convolutional nets.

We'll briefly review what artificial neural networks are in general and what convolutional neural nets are in particular, and then dig a bit into the mathematical guts of Dieleman et al.'s models to understand how they encode rotation invariance directly into the architecture. From there we'll discuss the empirical evidence Dieleman et al. present on their networks' performance, the advantages and disadvantages of their procedure, and possible generalizations of their approach, before opening the conversation up to meander freely. If prep time allows, we may also take a brief look at some actual examples of coding, computing with, and training rotationally invariant neural nets.
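
If you'd like a hands-on preview, below is a minimal NumPy sketch of two of the paper's core operations, cyclic slicing and cyclic pooling: apply one shared filter to all four 90-degree rotations of the input, rotate the resulting feature maps back into a common frame, and pool across orientations. This is not the authors' implementation; the helper names (conv2d, cyclic_slice, cyclic_pool) are illustrative only.

import numpy as np

def conv2d(x, k):
    # Valid-mode 2D cross-correlation of one image with one filter (no padding).
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def cyclic_slice(x):
    # S: the four 90-degree rotations of the input, all fed to the same filter.
    return [np.rot90(x, r) for r in range(4)]

def cyclic_pool(feats):
    # P: rotate each orientation's feature map back into the input's frame,
    # then average, so the result no longer depends on the starting orientation.
    aligned = np.stack([np.rot90(f, -r) for r, f in enumerate(feats)])
    return aligned.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))  # toy single-channel "image"
k = rng.standard_normal((3, 3))  # one shared convolutional filter

out = cyclic_pool([conv2d(xi, k) for xi in cyclic_slice(x)])
out_rot = cyclic_pool([conv2d(xi, k) for xi in cyclic_slice(np.rot90(x))])

# Rotating the input merely rotates the pooled feature map (equivariance)...
assert np.allclose(out_rot, np.rot90(out))
# ...so any spatially global statistic of it is invariant to 90-degree rotations.
assert np.isclose(out.mean(), out_rot.mean())

The full paper also introduces a rolling operation that keeps all four orientations flowing through deeper layers, but the slice-and-pool pair above is the smallest end-to-end illustration of the idea.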