Fairness in search and recommendation

Hosted by ali v.

Details

In this edition of SEA (brought forward one week due to the SIGIR deadline), we will discuss fairness in search and recommendation. We have two amazing speakers lined up: Bhaskar Mitra from Microsoft Research and Amifa Raj from Boise State University.

This will be a hybrid event; the in-person part will take place at Lab42, Science Park, room L3.35.

***
IMPORTANT: You will be able to view the Zoom link once you 'attend' the meetup on this page.
***

17.00: Bhaskar Mitra (Microsoft Research)

Title: Joint Multisided Exposure Fairness for Search and Recommendation
Abstract: Online information access systems, such as recommender and search systems, mediate what information gets exposure and thereby influence its consumption at scale. There is a growing body of evidence that information retrieval (IR) algorithms that narrowly focus on maximizing the ranking utility of retrieved items may disparately expose items of similar relevance from the collection. Such disparities in exposure raise concerns of algorithmic fairness and bias of moral import, and may contribute both to representational harms, by reinforcing negative stereotypes and perpetuating inequities in the representation of women and other historically marginalized peoples, and to allocative harms arising from disparate exposure to economic opportunities. In this talk, we present a framework of exposure fairness metrics that model the problem jointly from the perspective of both consumers and producers. Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items, towards more systemic biases in retrieval.
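
The disparity the abstract describes can be made concrete with a toy computation. The sketch below is a minimal illustration, not the joint multisided metrics presented in the talk: it assumes a simple logarithmic position-discount model of exposure and a hypothetical item-to-group mapping, and shows how a ranking of similarly relevant items can concentrate exposure on one producer group.

```python
import math
from collections import defaultdict

def group_exposure(ranking, item_groups,
                   discount=lambda rank: 1.0 / math.log2(rank + 1)):
    """Aggregate position-based exposure per producer group for one ranked list.

    ranking     : list of item ids, best first (rank 1 at index 0)
    item_groups : dict mapping item id -> producer-group label (assumed given)
    discount    : exposure credited to a rank (here a DCG-style log discount)
    """
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        exposure[item_groups[item]] += discount(rank)
    return dict(exposure)

# Hypothetical example: two producer groups, A and B, with equally relevant items.
ranking = ["a1", "a2", "b1", "a3", "b2"]
groups = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}
print(group_exposure(ranking, groups))
# Items from group A occupy the top ranks, so A accumulates far more exposure
# than B even though the items are interchangeable in relevance -- the kind of
# disparity that exposure-based fairness metrics are designed to surface.
```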

17.30: Amifa Raj (Boise State University)
Title: Measuring fairness in ranked results: An analytical and empirical comparison
Abstract: Information access systems, such as search and recommender systems, often use ranked lists to present results believed to be relevant to the user’s information need. Evaluating these lists for their fairness, alongside traditional metrics, provides a more complete understanding of an information access system’s behavior beyond accuracy or utility constructs. To measure the (un)fairness of rankings, particularly with respect to protected group(s) of producers or providers, several metrics have been proposed in recent years. However, an empirical and comparative analysis of these metrics, showing their applicability to specific scenarios or real data and their conceptual similarities and differences, is still lacking. We aim to bridge the gap between the theoretical and practical application of these metrics. In this talk, we describe several fair ranking metrics from the existing literature in a common notation, enabling direct comparison of their approaches and assumptions, and empirically compare them on the same experimental setup and data sets in the context of three information access tasks.
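
As a rough illustration of why a common notation helps, the sketch below (an assumption-laden toy example, not the metrics or notation from the talk) computes two simple group-level views of the same ranking: one based on the raw exposure gap between groups, and one normalising exposure by a hypothetical per-group relevance mass. The point is that such measures can disagree on the same list, which is exactly what a side-by-side comparison is meant to expose.

```python
import math

def exposure_per_group(ranking, item_groups):
    """Exposure mass each group receives under a log position discount."""
    exposure = {}
    for rank, item in enumerate(ranking, start=1):
        g = item_groups[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
    return exposure

def exposure_gap(exposure):
    """Parity-style view: gap in raw exposure between best- and worst-off group."""
    values = sorted(exposure.values(), reverse=True)
    return values[0] - values[-1]

def exposure_per_unit_relevance(exposure, relevance_mass):
    """Merit-based view: exposure received per unit of relevance, per group."""
    return {g: exposure[g] / relevance_mass[g] for g in exposure}

# Hypothetical example data.
ranking = ["a1", "b1", "a2", "a3", "b2"]
groups = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}
relevance = {"A": 3.0, "B": 2.0}  # assumed total relevance held by each group

exp = exposure_per_group(ranking, groups)
print(exposure_gap(exp))                          # absolute exposure gap
print(exposure_per_unit_relevance(exp, relevance))  # exposure normalised by merit
# The two views can disagree: a ranking may look unfair on raw exposure but
# fair once exposure is normalised by how much relevance each group holds.
```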

SEA talks #234 and #235.

SEA: Search Engines Amsterdam