- Monthly MLAI Meetup
Richard Wang - Generative Adversarial Nets (GANs)
Wei (Richard) Wang will be introducing generative adversarial nets (GANs) - a deep learning model with a variety of often surprising applications in computer vision and natural language processing. He will explain his recent work on improving the training of GANs for image generation tasks (accepted to ICLR 2019). Richard recently received his Ph.D. from the University of Melbourne, where he worked on several generative models: principal component analysis, Gaussian mixture models and generative adversarial nets.

Mat Kelcey - Practical Learning to Learn
Mat Kelcey is a machine learning consultant at ThoughtWorks, and will be discussing some of the core concepts of gradient descent when training a model on a single large dataset. He will also cover a couple of methods that use the same concepts to train models from a large number of small, but related, datasets.
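Mat's talk builds on the core idea of gradient descent. As a warm-up, here is a minimal, self-contained sketch of the basic update rule, fitting a one-parameter linear model by hand; the function names and toy data are invented for illustration, not taken from either talk:

```python
# Minimal gradient descent on mean-squared error for a 1-D linear model
# y ~ w * x. Toy data and names are invented for illustration.

def fit(xs, ys, lr=0.1, steps=200):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of (1/n) * sum((w*x - y)^2) with respect to w
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # true relationship: y = 2x
w = fit(xs, ys)             # converges close to 2.0
```

The same loop - compute a gradient of a loss, take a small step against it - is what scales up to training deep models on large datasets.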
- April MLAI Meetup: Patrick Robotham on Hyperparameter Optimisation
Speaker: Patrick Robotham
Subject: Patrick will be speaking about the ins and outs of practical hyperparameter optimisation.
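One of the simplest practical baselines in this area is random search over a log-uniform range. A toy sketch - the "validation loss" below is a stand-in function invented for illustration; in a real setup each trial would train and evaluate a model:

```python
import random

# Toy hyperparameter search. The objective is a made-up stand-in for
# "train a model with these settings and measure validation loss".

def validation_loss(lr, reg):
    # pretend the best settings are near lr=0.1, reg=0.01
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(trials=500, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, 0)    # log-uniform in [1e-4, 1]
        reg = 10 ** rng.uniform(-4, 0)
        loss = validation_loss(lr, reg)
        if best is None or loss < best[0]:
            best = (loss, lr, reg)
    return best

loss, lr, reg = random_search()
```

Sampling on a log scale matters in practice because hyperparameters like learning rates vary over orders of magnitude.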
- March MLAI Meetup: Office Hours
This month will be "office hours" which is a really great way to learn something and get to know each other. We split into small groups based on topic and discuss, teach and learn together. There will be a time allocated at the start to mingle; there will be paper up the front where you can write down: 'things you would like to learn' and 'things that you can explain or teach'. We'll form groups for topics that have been written up, or just keep going in groups that have formed on their own. We'll change over once, so you'll be part of two different groups/topics. There will also be some quick announcements from Andy Gelme and Laura Summers.
- Monthly MLAI Meetup
"Towards Socially-Aware Autonomous Vehicles from Vulnerable Road Users’ Perspective" by Khaled Saleh
Khaled is a researcher at Deakin University, and will be discussing challenges that autonomous/"self-driving" vehicles face when interacting with vulnerable road users (pedestrians, cyclists, etc.). He'll dive into the role of machine learning/deep learning techniques in tackling such challenges.

"How much data do you need to train a classifier?" by Noon Silk
Noon is a programmer with a background in maths, functional programming, and fashion design. The presentation will take the form of a 10-minute quiz, with audience questions and comments definitely welcome.

"Donkey Car: Scale-model, self-driving cars: Introduction, workshops and track days" by Andy Gelme
From humble beginnings around April 2016 (?), selfracingcars.com spawned diyrobocars.com in November 2016. That monthly meet-up now attracts hundreds of participants and has inspired track days around the world, including in Melbourne. These relatively inexpensive vehicles are a fun way to get into machine learning and robotics, and donkeycar.com is one of the most popular vehicles to build and race. In addition to a technical introduction, this brief presentation will also provide details of upcoming workshops and track days in Melbourne.
Bio: Andy is typically hacking at the intersection of the digital and physical worlds.
- Monthly MLAI Meetup
Lightning talks!

Real world considerations for ML product delivery - Anton de Weger
Anton is Application Machine Learning Product Manager at Workday, a cloud-based enterprise software company, and will be talking about some of the real-world issues around ML product development.

The state of ML on the edge - why DL isn't a golden bullet - Karthik Rajgopal
Karthik is co-founder, director and head of Black.ai's machine learning team. He owns an elephant (seriously) and puts pepper in his smoothies. Feel free to reach out to him if you need help with: a paper you are trying to implement, motorbike repairs, or the gym.

Others TBA.
- November MLAI Meetup — Eike Germann, Introduction to Reinforcement Learning
Eike Germann - Introduction to Reinforcement Learning
What is reinforcement learning? How is it different from the machine learning we're familiar with? Why do it? I'll present some foundational ideas (Markov decision processes, policy iteration, value iteration, etc.) and talk about their limitations. What algorithms are currently used to address those limitations, and how do they do it? Based on these, I'll give a short overview of what RL is currently used for - from training a machine to play Space Invaders to robotic movement in the real world. As a way into RL, I'll introduce the OpenAI Gym package, which aims to provide a standard platform for benchmarking RL developments. To round off the presentation, I'll give a little demonstration of a simulated system trained with reinforcement learning.
Bio: I came to Australia a bit more than 10 years ago as an audio engineer working in corporate AV and decided to switch careers when it got a little boring. Because it's definitely not boring, I wound up doing a physics degree at Monash University. While I was there, the effective altruist crowd got me interested in data science, so I started doing data science courses, visiting meetups, and decided that my future career would be in that field. After finishing my honours degree last year, I'm now working as a data scientist at Eliza.ai :)
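To make one of the foundational ideas above concrete, here is a self-contained sketch of value iteration on a tiny, hand-built MDP; the states, transitions and rewards are all invented for illustration (a real RL workflow would usually use an environment library such as OpenAI Gym):

```python
# Value iteration on a tiny deterministic MDP: states 0..3 on a line,
# actions move left/right, and reaching state 3 gives reward 1 and
# ends the episode. All details are invented for illustration.

GAMMA = 0.9
STATES = [0, 1, 2, 3]
ACTIONS = [-1, +1]          # move left / move right

def step(s, a):
    """Deterministic transition: returns (next_state, reward, done)."""
    ns = min(max(s + a, 0), 3)
    return ns, (1.0 if ns == 3 else 0.0), ns == 3

def value_iteration(tol=1e-8):
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s == 3:      # terminal state has no future value
                continue
            vals = []
            for a in ACTIONS:
                ns, r, done = step(s, a)
                vals.append(r + (0.0 if done else GAMMA * V[ns]))
            best = max(vals)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
# Optimal values fall off geometrically with distance from the goal:
# V[2] = 1.0, V[1] = 0.9, V[0] = 0.81
```

The limitation the talk alludes to is already visible here: this sweep touches every state, which stops being feasible once the state space is large - motivating the function-approximation methods used in modern RL.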
- Special Event: A.I. made Actually Intelligible with Andy Kitchen
A.I. made Actually Intelligible: A thinking person's guide to Deep Learning
With speaker RMIT alumnus Andy Kitchen, MC'd by CSIT organiser Michael Swiatkiwsky. Presented by The Machine Learning and AI Meetup, RMIT CSIT Society and RMIT School of Science.
Learn how deep learning works: no programming or maths needed. Find out what's possible with AI now, and what will be possible very soon. There will be live demos and plenty of visualisations.
Bio: Andy Kitchen is an AI researcher at CCLabs.ai, a Melbourne startup making connected medical devices. He was previously at Silverpond, where he started the deep learning consulting team, and is a co-founder of Intrascope, a startup doing data science on networks. His current research is on explaining consciousness from physical principles. He's also worked on teaching neural networks to bluff, diagnosing X-ray images, and finding sharks from drones.
Drinks and networking from 6.00pm. The talk starts at 6.30pm.
- September MLAI Meetup: Elizabeth Silver on Causality
Bio: Lizzie Silver is a Research Fellow at the University of Melbourne, where she analyses data for The SWARM Project, an effort to improve reasoning in human teams. She did her PhD at Carnegie Mellon University, on multitask methods for learning causal graphical models.
Talk description: A tutorial & overview of methods for learning causal graphical models. We need causal models in several situations: to figure out which variables are confounders in observational studies; to suggest new experiments; and to predict the results of interventions if experiments are infeasible or unethical. I'll show you how to learn a causal model from observational data, and I'll show you the strengths and weaknesses of the methods.
Graphical models have a structure and a parameterisation. The structure represents the qualitative causal relationships - "what causes what". The parameterisation represents the strength and functional form of those relationships. You need to know the structure before you can learn the parameterisation. I'll cover various algorithms for learning the model structure from observational data:
1. The PC algorithm ("Peter and Clark"), and PC-Stable
2. GES (Greedy Equivalence Search)
3. FCI (Fast Causal Inference)
4. LiNGAM (Linear Non-Gaussian Acyclic Model)
5. If we have time, more fancy stuff: latent variable models, non-linear causal additive models, SAT-solver methods, etc.
And I'll describe some of the major difficulties with causal inference:
* Validation
* Consistency
* Feature definition
* Measurement error
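One of the motivations above - predicting the results of interventions - can be illustrated with a tiny simulation. The sketch below hand-builds a linear structural causal model with a known chain X → Y → Z (the graph, coefficients and noise scales are all invented for illustration; the talk is about *learning* such structures, which this sketch takes as given) and contrasts ordinary sampling with sampling under an intervention do(X = x):

```python
import random

# Toy linear structural causal model with known chain X -> Y -> Z.
# Structure and coefficients are invented for illustration.

def sample(n, do_x=None, seed=0):
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        # do(X = x) replaces X's own mechanism with a fixed value
        x = rng.gauss(0, 1) if do_x is None else do_x
        y = 2.0 * x + rng.gauss(0, 0.1)
        z = -1.0 * y + rng.gauss(0, 0.1)
        rows.append((x, y, z))
    return rows

obs = sample(10_000)            # observational data
intv = sample(10_000, do_x=1.0) # interventional data under do(X=1)
mean_z_do = sum(z for _, _, z in intv) / len(intv)
# Under this model, E[Z | do(X=1)] = -1 * 2 * 1 = -2, and the
# simulated mean lands close to that value.
```

This is the payoff of knowing the causal structure: once "what causes what" is fixed, predicting an intervention is just replacing one mechanism and propagating - something a purely predictive model fit to observational data cannot do in general.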