Virtual Session "Gender Bias in Machine Learning"

Hosted By
Iryna P.

Details

To access this session, please register here: https://hubs.li/Q02Nz-Yh0

Topic: Gender Bias in Machine Learning

Speaker: Shalvi Mahajan / Senior Data Scientist / SAP SE
Based in Munich, she is passionate about applying ML and AI techniques to solve real-world problems and enjoys tackling mathematical and business challenges alike. Earlier in her career she worked as a software engineer and still enjoys hands-on coding. Throughout her career she has sought out opportunities to share her knowledge with wider audiences at talks and conferences. Outside of work, she loves to travel and is either on a trip or planning the next one!

Abstract:
Gender bias in AI is a prevalent issue that affects many aspects of our lives, from the design of products to the services offered to each gender. AI systems often reflect and amplify existing gender biases and stereotypes in society.

Even otherwise well-performing ML algorithms frequently exhibit gender bias. With even the best machine translation systems, a sentence about a nurse is rendered as female by default, while a doctor is assumed to be male.
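
As a quick illustration, here is a minimal sketch of such a probe, assuming the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-tr-en translation model (exact outputs depend on the model version). Turkish uses the gender-neutral pronoun "o", so the model is forced to pick a gendered English pronoun, exposing its learned defaults:

```python
# Probe translation gender bias: translate gender-neutral Turkish
# sentences and see which English pronoun the model chooses.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

sentences = [
    "O bir hemşire.",  # "(He/She) is a nurse."
    "O bir doktor.",   # "(He/She) is a doctor."
]

for src in sentences:
    out = translator(src)[0]["translation_text"]
    print(f"{src} -> {out}")

# Biased systems tend to produce "She is a nurse." and "He is a doctor."
```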

There are many challenging problems of this kind that we implicitly face but tend to ignore. These small biases, left unexamined, add up to large disparities for people around the world.

One significant contributor to gender bias in ML is biased training data, which can reinforce and perpetuate stereotypes. Natural language processing (NLP) models, and in particular today's much-discussed large language models (LLMs), illustrate this well: trained on vast amounts of internet text that reflects historical biases and societal norms, they can inadvertently learn and reproduce the gender biases present in that data. For example, LLMs tend to follow gender stereotypes when picking the likely referent of a pronoun, as the sketch below demonstrates.
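
The following is a rough sketch of such a stereotype probe in the spirit of WinoBias-style evaluations, assuming the transformers library and the public bert-base-uncased checkpoint. It compares how strongly the model prefers "he" versus "she" in simple occupation templates:

```python
# Compare masked-LM scores for "he" vs. "she" in occupation templates.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "[MASK] is a nurse.",
    "[MASK] is a doctor.",
]

for template in templates:
    scores = {out["token_str"]: out["score"]
              for out in fill(template, targets=["he", "she"])}
    print(template, scores)

# A stereotyped model assigns "she" a higher score for "nurse"
# and "he" a higher score for "doctor".
```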

Addressing gender bias in ML requires a multi-faceted approach: improving dataset diversity, making algorithms more transparent and interpretable, and applying fairness-aware techniques during model development (one such check is sketched below). There is also a growing need for ethical guidelines and regulations governing the deployment of ML systems, to ensure accountability and transparency in their decision-making. In this session, we will examine this problem and explore ways to address it.
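
As one example of a fairness-aware check, here is a minimal sketch assuming the fairlearn library and hypothetical classifier outputs (the labels, predictions, and gender attribute below are synthetic placeholders). Demographic parity difference measures the gap in positive-prediction rates between groups; 0 means parity:

```python
# Measure the demographic parity gap of a classifier's predictions.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)

# Synthetic stand-ins for real labels, predictions, and gender attribute.
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
gender = rng.choice(["female", "male"], size=1000)

gap = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
print(f"Demographic parity difference: {gap:.3f}")

# Values near 0 indicate similar selection rates across genders; larger
# values flag a disparity worth mitigating during model development.
```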

ODSC Links:
• Get free access to more talks/trainings like this at the Ai+ Training platform:
https://hubs.li/H0Zycsf0
• ODSC blog: https://opendatascience.com/
• Facebook: https://www.facebook.com/OPENDATASCI
• Twitter: https://twitter.com/_ODSC & @odsc
• LinkedIn: https://www.linkedin.com/company/open-data-science
• Slack Channel: https://hubs.li/Q02zdcSk0
• Code of conduct: https://odsc.com/code-of-conduct/

ODSC Data Science Melbourne