DSAi: ML Ethics & Interpretability Special Edition @ Microsoft

Hosted By
Paul C.


PLEASE NOTE: RSVPing to this page DOES NOT GRANT YOU ACCESS to this meetup.
Spaces are limited! Due to popular demand and Microsoft Facilities Management requirements, PLEASE SIGN UP for A TICKET via the EVENTBRITE LINK BELOW:




How do we design AI systems that we can trust?

Algorithmic Bias, Algorithmic Transparency, Technological Unemployment, Data Privacy & Algorithmic Misinformation (fake news) are just some of the issues facing the fair and ethical use of Machine Learning.

Come along to this DSAi special edition Ethics & Interpretability event, run in collaboration with Microsoft, to learn from industry leaders how issues such as Algorithmic Bias might affect you and what is being done to address the ethical use of Machine Learning in 2019.


‘Ethics for Artificial Intelligence’

In this 20-minute presentation, Aurelie will provide a formal introduction to what ethical and responsible AI is. She will go through the ethical issues that have already arisen from the use of algorithms in society and explain why ethics will play an essential role in successfully designing, deploying and using Artificial Intelligence in the years ahead.


Aurelie is a member of ‘The Australian AI Working Group’, established by Standards Australia with the aim of becoming the preeminent forum for the exploration and discussion of AI by engaging both government and industry; the global IEEE working group for Standard P7000, ‘Model Process for Addressing Ethical Concerns During System Design’; and The European AI Alliance, a forum established by the European Commission. Separately from her initiatives around Ethics and AI, she is a practising lawyer with over 10 years’ experience in Financial Services.


Interpretable Machine Learning

Machine learning models have been getting more accurate, but also more complex, over the past few years. However, in many settings data scientists are still tethered to linear models or decision trees because they are (relatively!) easy to explain. Moreover, developments in data ethics and governance are increasing the pressure on data scientists to explain their models and to ensure that discrimination and other unwanted outcomes are avoided. In this talk, Anthony will outline the latest and greatest in machine learning interpretability and explain why it is a crucial part of any data scientist’s toolkit.
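As a flavour of what "interpretable" means here (this sketch is not from the talk, and the loan-scoring model and its weights are entirely hypothetical): with a linear model, every prediction decomposes into per-feature contributions of the form coefficient × feature value, so you can say exactly which inputs drove a score up or down.

```python
# Hypothetical linear loan-scoring model; weights are made up
# for illustration, not learned from real data.
coefficients = {"income": 0.4, "debt": -0.7, "age": 0.1}
intercept = 0.2

def predict(features):
    """Score = intercept + sum of per-feature contributions."""
    return intercept + sum(coefficients[name] * value
                           for name, value in features.items())

def explain(features):
    """Per-feature contributions: the 'explanation' of one prediction."""
    return {name: coefficients[name] * value
            for name, value in features.items()}

applicant = {"income": 1.0, "debt": 2.0, "age": 0.5}
print(predict(applicant))  # overall score
print(explain(applicant))  # which features drove it, and by how much
```

A deep network offers no such term-by-term breakdown out of the box, which is why post-hoc attribution methods exist for complex models.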


Anthony Tockar is director and cofounder at Verge Labs, an AI company focused on the applied side of machine learning. A jack-of-all-trades, he has worked on problems across insurance, technology, telecommunications, loyalty, sports betting and even neuroscience. He qualified as an actuary, then moved into data science, completing an MS in Analytics at the prestigious Northwestern University.

After hitting the headlines with his posts on data privacy at Neustar, he returned to Sydney to practice as a data scientist and cofounded the Minerva Collective, a not-for-profit focused on using data for social good, as well as multiple meetup groups. His key missions are to extend the reach and impact of data science to help people, and to assist Australian businesses to become more data driven.


Ethics & Interpretability Fire-Side Chat

We believe that Machine Learning Ethics & Interpretability is one of the grand issues to be effectively tackled in the coming decade.

To explore ML Ethics & Interpretability further, after the speakers have delivered their presentations we will open the floor to a “fire-side” chat, where the audience is invited to ask questions and debate the topics raised.



6:00 - 6:30: Beer, Pizza, Networking
6:30 - 6:40: Intro
6:40 - 7:10: Aurelie's Talk
7:10 - 7:20: Short break
7:20 - 7:50: Anthony's Talk
7:50 - 8:15: Fire-Side Chat
8:15 - 8:30: Wrap-up
