
Removing Unfair Bias in Machine Learning

Hosted By
Ji D. and Sou-Cheng T. C.

Details

We appreciate IBM's sponsorship of this upcoming meetup. The meetup will be hosted online; you will receive the event Zoom link after you RSVP.

Abstract:

Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to illegal bias. So how do you remove bias & discrimination from the machine learning pipeline?
In this webinar you'll learn debiasing techniques that can be implemented with the open source toolkit AI Fairness 360.

AI Fairness 360 (AIF360, https://aif360.mybluemix.net/) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
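As a rough sketch of how those three pieces map onto the package layout (assuming a standard pip install aif360 environment; the specific classes shown are illustrative examples, not an exhaustive list):

```python
# Install: pip install aif360   (some algorithms need optional extras)

# Bias metrics: quantify fairness of datasets and classifier outputs.
from aif360.metrics import BinaryLabelDatasetMetric, ClassificationMetric

# Metric explainers: turn raw metric values into plain-language reports.
from aif360.explainers import MetricTextExplainer

# Bias mitigation algorithms, grouped by where they act in the pipeline:
from aif360.algorithms.preprocessing import Reweighing             # adjust the data
from aif360.algorithms.inprocessing import PrejudiceRemover        # adjust the model
from aif360.algorithms.postprocessing import EqOddsPostprocessing  # adjust predictions
```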

In this meetup you'll learn:

How to measure bias in your data sets & models
How to apply fairness algorithms to reduce bias
How to work through a practical use case of bias measurement & mitigation (a short code sketch follows below)
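For reference, here is a minimal measure-then-mitigate sketch in the spirit of the session. It uses a tiny synthetic dataset so it runs without downloading anything; the column names, group encodings, and the choice of Reweighing as the mitigation step are illustrative assumptions, not necessarily the exact use case the webinar will cover.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged, 0 = unprivileged)
# and 'label' is the outcome (1 = favorable). The privileged group receives the
# favorable label more often, so the data is biased by construction.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# 1. Measure bias in the original data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Statistical parity difference (before):", metric.mean_difference())

# 2. Mitigate with Reweighing, a pre-processing algorithm that assigns
#    instance weights to balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

# 3. Re-measure on the reweighed data; the difference should move toward 0.
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Statistical parity difference (after): ", metric_transf.mean_difference())
```

A statistical parity difference near 0 (or a disparate impact ratio near 1) indicates the favorable outcome rate is roughly equal across groups; the webinar covers additional metrics and mitigation algorithms beyond this single example.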

Speaker Bio:

Trisha Mahoney
Sr. AI Tech Evangelist IBM
Trisha Mahoney is an AI Tech Evangelist for IBM with a focus on Fairness & Bias. Trisha has spent the last 10 years working on Artificial Intelligence and Cloud solutions at several Bay Area tech firms, including Salesforce, IBM, and Cisco. Prior to that, she spent 8 years working as a data scientist in the chemical detection space. She holds an Electrical Engineering degree and an MBA in Technology Management.

PyData Chicago