ML at Work #1: Adversarial Autoencoders, Insight Data Science, and more!


Details
PLEASE RSVP THROUGH EVENTBRITE LINK
https://adversarialautoencoders.eventbrite.com
Machine Learning at Work meets to discuss topics in artificial intelligence and machine learning. Our goal is to bring together those actively working, studying, or interested in working in the AI / ML space. Attendees will share ideas, projects, algorithms, and results with their peers in NYC. Practical considerations for AI implementations are the focus of this meetup, and members are encouraged to bring a laptop and come prepared to write and modify some code.
Speaker Lineup:
• Felipe Ducau - NYU - Masters in Data Science / Machine Learning
Adversarial Autoencoders: an unsupervised approach to learning disentangled representations, with adversarial learning as a key ingredient in a Variational Autoencoder-like architecture. We will look at the model from Makhzani et al., discuss the intuition behind it, and walk through some sample code and implementation considerations (a minimal sketch follows the speaker lineup below).
• Ross Fadely - Artificial Intelligence Lead at Insight Data Science
The landscape of data careers is constantly evolving. Most recently, there has been growing demand across many industries for people with talent in AI. At Insight, we run a free 7-week Fellowship program to help people make the jump to AI careers. During the program, Fellows build AI-focused projects over the span of 4 weeks. I will briefly discuss the current landscape of AI roles, followed by a deep dive into two projects recently completed by Insight AI Fellows. Along the way I will highlight practical tips and hurdles tackled during the projects, as well as resources to help people kick-start similar AI projects.
• Laura Graesser, Wah Loon Keng - OpenAI Lab (https://github.com/kengz/openai_lab)
Deep Q-Learning: a brief introduction to reinforcement learning with the Deep Q-Learning (DQN) algorithm using the OpenAI Lab. The Lab is an experimentation system for reinforcement learning built on the OpenAI Gym, TensorFlow, and Keras. During this session we will learn how to solve the classic cart-pole problem, review the main components of the DQN algorithm and why they matter, and introduce the Lab (a compact DQN sketch also follows below).
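For a sense of the core mechanic behind Felipe's talk, here is a minimal adversarial autoencoder sketch, assuming TensorFlow/Keras, flattened 784-dimensional inputs in [0, 1], and a Gaussian prior over a 2-D latent code. Layer sizes, optimizers, and the training loop are illustrative assumptions, not code from the talk.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim, input_dim = 2, 784  # assumed sizes (e.g. flattened 28x28 images)

# Encoder plays the role of the GAN "generator": it maps inputs to latent codes.
encoder = models.Sequential([layers.Dense(512, activation="relu"),
                             layers.Dense(latent_dim)])
# Decoder reconstructs the input from the latent code.
decoder = models.Sequential([layers.Dense(512, activation="relu"),
                             layers.Dense(input_dim, activation="sigmoid")])
# Discriminator tells prior samples apart from encoder outputs.
discriminator = models.Sequential([layers.Dense(256, activation="relu"),
                                   layers.Dense(1, activation="sigmoid")])

ae_opt, d_opt, g_opt = (tf.keras.optimizers.Adam(1e-3),
                        tf.keras.optimizers.Adam(1e-4),
                        tf.keras.optimizers.Adam(1e-4))
bce = tf.keras.losses.BinaryCrossentropy()

def train_step(x):
    """One AAE step on a batch x of shape [batch, input_dim] with values in [0, 1]."""
    batch = tf.shape(x)[0]
    # 1) Reconstruction phase: update encoder + decoder on reconstruction loss.
    with tf.GradientTape() as tape:
        rec_loss = bce(x, decoder(encoder(x)))
    ae_vars = encoder.trainable_variables + decoder.trainable_variables
    ae_opt.apply_gradients(zip(tape.gradient(rec_loss, ae_vars), ae_vars))
    # 2) Regularization phase, discriminator side: prior samples are "real",
    #    encoder codes are "fake".
    z_prior = tf.random.normal([batch, latent_dim])
    z_fake = encoder(x)
    with tf.GradientTape() as tape:
        d_loss = (bce(tf.ones([batch, 1]), discriminator(z_prior)) +
                  bce(tf.zeros([batch, 1]), discriminator(z_fake)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # 3) Regularization phase, generator side: train the encoder to fool the
    #    discriminator, pushing encoder codes toward the prior.
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones([batch, 1]), discriminator(encoder(x)))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, encoder.trainable_variables),
                              encoder.trainable_variables))
    return rec_loss, d_loss, g_loss
```

Trained this way, the distribution of codes is pushed toward the chosen prior while reconstructions stay faithful, which is the lever for learning structured representations discussed in Makhzani et al.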
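In the same spirit, here is a compact Deep Q-Learning sketch for cart-pole, assuming the classic gym step/reset API and Keras. The network size, replay-buffer settings, and epsilon schedule are illustrative assumptions; the OpenAI Lab wraps this kind of loop inside its own experiment framework rather than exposing it raw.

```python
import random
from collections import deque

import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

env = gym.make("CartPole-v1")
n_states, n_actions = env.observation_space.shape[0], env.action_space.n

# Q-network: maps a state to one estimated return (Q-value) per action.
q_net = models.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_states,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_actions, activation="linear"),
])
q_net.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

memory = deque(maxlen=10_000)             # experience replay buffer
gamma, epsilon, batch_size = 0.99, 1.0, 64

for episode in range(200):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy exploration.
        if random.random() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_net.predict(state[None], verbose=0)[0]))
        next_state, reward, done, _ = env.step(action)
        memory.append((state, action, reward, next_state, done))
        state = next_state

        # Learn from a random minibatch of past transitions.
        if len(memory) >= batch_size:
            s, a, r, s2, d = map(np.array, zip(*random.sample(memory, batch_size)))
            # Bellman targets: r + gamma * max_a' Q(s', a'), no bootstrap at terminal states.
            targets = q_net.predict(s, verbose=0)
            targets[np.arange(batch_size), a] = (
                r + gamma * q_net.predict(s2, verbose=0).max(axis=1) * (1 - d)
            )
            q_net.train_on_batch(s, targets)
    epsilon = max(0.05, epsilon * 0.97)   # decay exploration over episodes
```

Experience replay and the bootstrapped Bellman target are the main DQN components alluded to above; a separate target network, which further stabilizes training, is omitted here for brevity.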
Optimization Challenge
We will provide sample code for a fast-to-converge deep learning task. The goal of the challenge is to tune the code however you like so as to reach the lowest cost value at the end of 30 minutes. Machines to run the code will be provided through Paperspace (a toy illustration of the kind of tuning involved appears below).
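As a purely hypothetical illustration of what such a challenge looks like (the real challenge code and task will be handed out at the event; the dataset, model, and settings below are placeholders), the knobs to turn are typically hyperparameters such as layer widths, learning rate, batch size, and number of epochs:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder task: the actual challenge task and cost function will differ.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# Hyperparameters you might tune: width, depth, activation, optimizer, learning rate...
model = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy")

# ...and batch size / number of epochs, all within the 30-minute budget.
history = model.fit(x_train, y_train, batch_size=128, epochs=3, verbose=2)
print("final cost:", history.history["loss"][-1])  # the quantity to drive down
```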
