Algorithmic Stereotyping: Overview and Key Considerations for More Ethical AI
Details
The public and government regulators are increasingly concerned about discriminatory, biased, and unfair outputs from machine learning models. Stereotyping, or ascribing common traits to all individuals in a group, is a key risk for such models, which are designed to generate predictions based on feature similarities. This talk provides an overview of algorithmic stereotyping in regression and classification models. Using synthetic data, I demonstrate that standard fairness metrics cannot distinguish stereotyping from decisions based on arguably reasonable factors. Therefore, rather than tuning fairness metrics, I suggest a “due diligence” process to assess your model for disparities, explain the features driving differences, and consider missing information that might improve fairness. The talk will provide an overview of common fairness metrics, discuss explainability techniques that can surface detailed information about group differences, and suggest a structured report format suitable for stakeholders or oversight committees.
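As a flavor of the kind of fairness metric the talk covers, the sketch below computes the demographic parity difference (the gap in positive-prediction rates between two groups) on synthetic data. The data-generating choices, thresholds, and variable names here are illustrative assumptions, not the talk's actual examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Illustrative synthetic data: a binary group label and model scores whose
# distribution differs by group (an assumed setup for demonstration only).
group = rng.integers(0, 2, size=n)                # 0 or 1: group membership
scores = rng.normal(0.4 + 0.2 * group, 0.3, n)    # group 1 scores skew higher
preds = (scores > 0.5).astype(int)                # binary classification decision

# Demographic parity difference: gap in positive-prediction rates.
rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
dp_diff = abs(rate_1 - rate_0)
print(f"Positive rate, group 0: {rate_0:.3f}")
print(f"Positive rate, group 1: {rate_1:.3f}")
print(f"Demographic parity difference: {dp_diff:.3f}")
```

Note that a nonzero gap alone does not say whether it reflects stereotyping or legitimate feature differences, which is the distinction the due-diligence process is meant to probe.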
NOTE TO ATTENDEES: This meeting is password protected. The password will be sent out in a message a few days in advance of the meeting. The link can be viewed on Meetup and will also be included in that message.
