[UCL WI Talks]: Gender Fairness in Information Retrieval Systems
Details
Abstract:
Recent studies have shown that stereotypical gender biases can find their way into both the representational and algorithmic aspects of retrieval methods, and hence manifest in retrieval outcomes. In this talk, we review studies that have systematically reported the presence of stereotypical gender biases in Information Retrieval (IR) systems. We classify existing work on gender bias in IR systems as relating to (1) relevance judgement datasets, (2) the structure of retrieval methods, and (3) the representations learnt for queries and documents. We show how each of these components can be affected by, or itself intensify, bias during retrieval. We also cover evaluation metrics for measuring both the level of bias and the utility of a model, as well as de-biasing methods that can be leveraged to mitigate gender bias within these models.
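To make the idea of a bias metric concrete, here is a minimal illustrative sketch (not taken from the talk) of a term-frequency style measure in the spirit of metrics from this literature: each document is scored by how strongly it leans toward male versus female gendered terms, and a ranked list is scored by a rank-weighted average. The word lists, weighting scheme, and documents below are hypothetical simplifications for illustration only.

```python
# Hypothetical, simplified bias measure for a ranked retrieval list.
# Real metrics in the literature use curated gendered-word lists and
# more careful normalisation; this sketch only conveys the shape of the idea.

MALE_TERMS = {"he", "him", "his", "man", "men", "male"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "female"}

def doc_bias(text: str) -> float:
    """Per-document bias: (male count - female count) / total tokens."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    male = sum(t in MALE_TERMS for t in tokens)
    female = sum(t in FEMALE_TERMS for t in tokens)
    return (male - female) / len(tokens)

def ranked_list_bias(docs: list[str]) -> float:
    """Rank-weighted average bias of a retrieved list.

    Documents at higher ranks receive larger weights (1/rank),
    reflecting that users mostly see the top of the list.
    """
    if not docs:
        return 0.0
    weights = [1.0 / (rank + 1) for rank in range(len(docs))]
    scores = [doc_bias(d) for d in docs]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

if __name__ == "__main__":
    retrieved = [
        "he is a talented engineer and he leads the team",
        "the scientist published her results",
    ]
    print(f"ranked-list bias = {ranked_list_bias(retrieved):.4f}")
```

A score near zero indicates a balanced list; positive values lean male, negative values lean female. A de-biasing method would aim to push this score toward zero while preserving the utility (relevance) of the ranking.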
Bio:
Negar is a PhD student at the University of Waterloo, supervised by Dr. Charles Clarke. She has been conducting research in Information Retrieval and Natural Language Processing for over five years as a graduate student and research assistant at Toronto Metropolitan University and the University of Waterloo. Her research interests centre on ad-hoc retrieval and fairness evaluation in IR and NLP; she has presented tutorials on fairness and evaluation in information retrieval at SIGIR 2022, WSDM 2022, and ECIR 2023. Negar has also completed research-oriented internships at Microsoft Research, Spotify Research, and Google Brain, and she was one of the lead organizers of the NeurIPS IGLU competition on Interactive Grounded Language Understanding in a Collaborative Environment.
