What we'll do
This is a new paper from Google Brain, released in April 2019 and presented at ICCV 2019.
Convolutional networks have been the dominant choice for computer vision applications. The convolution operation, however, has a significant weakness: it operates only on a local neighborhood, missing global context. Self-attention, on the other hand, has emerged as a recent advance for capturing long-range interactions, but has mostly been applied to sequence modeling and generative modeling tasks. This paper introduces self-attention for discriminative visual tasks as an alternative to convolutions. Combining convolutions and self-attention leads to consistent improvements in image classification on ImageNet and object detection on COCO across many different models and scales, including ResNets and a state-of-the-art mobile constrained network.
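To make the core idea concrete before the meeting: the paper augments a convolution's local features with self-attention features computed over all spatial positions, concatenating the two along the channel dimension. Below is a minimal NumPy sketch of that combination, not the paper's actual architecture (which uses multi-head attention with relative position embeddings); the function names and the random projection weights are illustrative assumptions.

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over flattened spatial positions.
    x: (n, d) array of n positions with d features each."""
    n, d = x.shape
    rng = np.random.default_rng(0)
    # Hypothetical "learned" query/key/value projections, randomly initialized here.
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)                  # (n, n) pairwise interactions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all positions
    return weights @ v                             # each position attends globally

def conv3x3(img, kernel):
    """Naive 'same' 3x3 convolution on a single-channel image: local only."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

# Toy 8x8 single-channel feature map.
img = np.arange(64, dtype=float).reshape(8, 8)
conv_out = conv3x3(img, np.ones((3, 3)) / 9.0)               # local features
attn_out = self_attention(img.reshape(64, 1)).reshape(8, 8)  # global features
augmented = np.stack([conv_out, attn_out], axis=0)           # concat channels
print(augmented.shape)  # (2, 8, 8)
```

The key contrast to notice: the convolution output at each pixel depends only on its 3x3 neighborhood, while the attention output at each pixel is a weighted sum over every position in the image.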
Paper to read:
Bello, Irwan, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V. Le. "Attention augmented convolutional networks." ICCV 2019. https://arxiv.org/pdf/1904.09925
Ramachandran, Prajit, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. "Stand-Alone Self-Attention in Vision Models." arXiv:[masked], 2019. https://arxiv.org/pdf/1906.05909
Presenter: Junling Hu
This is part of our bi-weekly reading series, where we come together to discuss cutting-edge AI topics and papers. One paper is selected as the main discussion topic. The meeting is led by one presenter, with group discussion and participation. Bring your questions and get them answered, and socialize with other like-minded people.
6:30-7pm Meet and greet
7-8pm Paper presentation and group discussion
8-8:30pm Additional social