Open Source Tools for Detecting Bias and Increasing Transparency in ML Models


Details
PyData DC monthly virtual meetup
Date & Time: Thursday, July 1, 2021, 6 p.m. EDT
Topic: Open Source Tools for Detecting Bias and Increasing Transparency in Machine Learning Models
Speaker: Saishruthi Swaminathan, Technical Lead & Data Scientist at IBM
Description: Machine learning models are increasingly used to inform high-stakes decisions. Discrimination by machine learning becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Bias in training data, due to prejudiced labels or under- or oversampling, yields models with unwanted bias. This session will explore open source tools to detect and mitigate bias, increase transparency, and enable governance in ML models.
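One open source toolkit in this space is IBM's AI Fairness 360 (aif360). The sketch below is not taken from the talk; it is a minimal, illustrative example that assumes a toy dataset with a made-up protected attribute ("sex") and shows how a group-fairness metric such as disparate impact can be computed on training data before any mitigation is applied.

# Minimal sketch (illustrative only, not from the talk): measuring dataset bias
# with IBM's open source AI Fairness 360. Install with: pip install aif360
# Column names and group encodings below are assumptions for the example.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged, 0 = unprivileged),
# 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [34, 45, 29, 51, 23, 37, 41, 30],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged).
# Values well below 1.0 indicate the unprivileged group is systematically disadvantaged.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())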
Saishruthi's passion is to dive deep into the ocean of data, extract insights, and use AI for social good. Previously, she worked as a software developer. She is on a mission to share the knowledge and experience she has acquired along the way. She also leads an education initiative for rural children and organizes meetups focused on women's empowerment. She holds a master's degree in electrical engineering, specializing in data science, and a bachelor's degree in electronics and instrumentation.
Program
6:00 Welcoming remarks
6:05 Announcements
6:15 Presentation
6:45 Q&A
6:55 Virtual happy hour (BYOB)
We'll keep the Zoom meeting open for a few minutes afterward so we can all connect with each other.
*
*
*
Free Food for Attendees!
We're super excited to offer "virtual food"! I'm sure you all remember that one of the perks of attending a meetup IRL was the free food. If you attend the event, you will receive a $15 gift card to DoorDash! We'll hand them out right before the talk begins via Zoom call DMs on a first-come, first-served basis (one per attendee). We hope this makes it even easier to kick back and enjoy our meetups on a Thursday evening.
