What we're about

Welcome to the official Deep Learning meetup in Austin, Texas. Deep learning offers exciting solutions to an array of computer vision, natural language processing, and other data science problems, made possible by faster computing, richer data sets, and open source frameworks.

We invite talks from machine learning practitioners and data scientists applying deep learning to solve problems, along with tutorials and lessons learned. Talks are open to all deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). No sales pitches, please.

• Join our Slack (https://austin-deep-learning-slack.herokuapp.com/)

• Participate in the selection of papers for the journal club (https://trello.com/b/gi5EJlzy/austin-deep-learning-journal-club)

If you would like to sponsor our meetup or give a talk, please let us know.

About our sponsors:

KUNGFU.AI (www.kungfu.ai) is an artificial intelligence agency, helping companies start and accelerate AI programs by providing strategy and development services.

Capital Factory (www.capitalfactory.com) is the center of gravity for entrepreneurs in Texas.

Walmart Technology (www.bit.ly/2WJO1fC) isn’t your traditional Walmart. We’re inventing the future of retail and our team is rapidly innovating to deliver it at our facilities across Texas and Arkansas. In these spaces, you’ll find eager technologists exploring emerging capabilities like machine learning, artificial intelligence, and natural language processing – just to name a few. Be sure to check our open positions to kickstart your career at Walmart and follow us on Twitter to keep up with our events.

VARIdesk (www.varidesk.com/) is passionate about helping people get more done and feel better doing it. Their line of sit-stand desks and active office products continues to grow, and their customers now span the globe.

Upcoming events

Journal Club: Attention is All You Need (Transformers)

Attention is All You Need (Transformers) (https://trello.com/c/ipI80uy0/326-attention-is-all-you-need-transformers)

Abstract: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."

The Austin Deep Learning Journal Club is a group for committed machine learning practitioners and researchers alike. The group meets every other Tuesday to discuss research publications. The publications are usually ones that laid the foundation for ML/DL or explore novel, promising ideas, and they are selected by vote. Participants are expected to read the publications so they can contribute to the discussion and learn from others. This is also a great opportunity to showcase your implementations and get feedback from other experts.

Anyone can suggest and vote for the next paper in the Austin Deep Learning Slack workspace (#paper_group channel): https://austin-deep-learning-slack.herokuapp.com/

Please only RSVP if you are certain that you will be participating.

What to bring: a copy of the paper (either digital or hardcopy)
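If you want a concrete handle on the paper before the meeting: its core building block is scaled dot-product attention, where each output position is a weighted average of value vectors, with the weights computed from query-key similarity. Here is a minimal NumPy sketch (variable names and shapes are illustrative, not taken from the authors' code):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V   (Eq. 1 in the paper)
        # Q: (len_q, d_k); K: (len_k, d_k); V: (len_k, d_v).
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # similarity of each query to each key
        scores -= scores.max(axis=-1, keepdims=True)  # shift for a numerically stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                            # weighted average of the values

    # Toy usage: 4 query positions attending over 6 key/value positions.
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 8))
    K = rng.normal(size=(6, 8))
    V = rng.normal(size=(6, 16))
    print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 16)

The full model stacks this into multi-head attention plus feed-forward layers, but this one function is the piece that replaces recurrence and convolution in the abstract's claim.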
