
SEA: Cognitive biases in recommenders

Hosted By
Ali V. and Pooya K.

Details

In this edition of SEA we will discuss cognitive biases and nudging in recommendation systems. We have two amazing speakers lined up: Yu Liang (Eindhoven University of Technology) and Atanu Sinha (Adobe Research).
This will be a hybrid event; the in-person part will take place at Lab42, Science Park, room L3.36.
***
IMPORTANT: You will be able to view the Zoom link once you 'attend' the meetup on this page.
***
17.00: Yu Liang (Eindhoven University of Technology)
Title: Promoting Music Exploration through Personalized Nudging in a Genre Exploration Recommender
Abstract: Recommender systems are efficient at predicting users’ current preferences, but how users’ preferences develop over time is still under-explored. In this work, we study the development of users’ musical preferences. Exploring the consistency between short-term and long-term musical preferences in data from earlier studies, we find that users with higher musical expertise have more consistent preferences for their top-listened artists and tags than those with lower musical expertise, while high-expertise users also show more diverse listening behavior. Users typically chose to explore genres that were close to their current preferences, and this effect was stronger for expert users. Based on these findings, we conducted a user study on genre exploration to investigate (1) whether it is possible to nudge users to explore more distant genres, (2) how users’ exploration behavior within a genre is influenced by default recommendation settings that balance personalization with genre representativeness in different ways, and (3) how nudging for exploration increases the perceived helpfulness of recommendations for exploration. Our results show that users were more likely to select more distant genres if these genres were nudged by being presented at the top of the list; however, users with high musical expertise were less likely to do so. We also find that the more representative default slider, which by default recommended more genre-representative tracks, made users set the slider at a less personalized level. The more representative slider position alone did not promote exploration beyond users’ current preferences, but when combined with the nudge for distant genres, it effectively nudged users to explore. Nudging to explore does not necessarily lead to an increase in perceived helpfulness. On the one hand, nudging for exploration improves perceived helpfulness for exploration by making users explore away from their current preferences. On the other hand, it reduces helpfulness due to lower perceived personalization as users move away from their personalized recommendations. To improve perceived helpfulness, it seems necessary to provide a balanced trade-off between exploration and personalization.
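The talk describes two concrete nudges: presenting more distant genres at the top of the list, and setting a default slider position that favors genre-representative tracks over personalized ones. Below is a minimal Python sketch of how such nudges could be wired up; it is an illustration under assumptions, not the speaker's implementation, and the function names, the cosine-distance measure, and the linear blending scheme are all hypothetical.

```python
import numpy as np

def nudge_genre_list(user_profile, genre_embeddings, top_k=3):
    """Order genres so the most distant ones appear first (the list nudge).

    Distance is taken as low cosine similarity to the user's taste vector;
    this metric is an assumption for illustration."""
    sims = genre_embeddings @ user_profile / (
        np.linalg.norm(genre_embeddings, axis=1) * np.linalg.norm(user_profile)
    )
    return np.argsort(sims)[:top_k]  # ascending similarity: distant genres first

def blend_scores(personal, representative, slider=0.7):
    """Blend per-track scores; a higher default `slider` plays the role of the
    'more representative default' discussed in the talk."""
    return slider * representative + (1 - slider) * personal

# Toy usage: pick the three most distant of eight genres for one user.
rng = np.random.default_rng(0)
genres = rng.normal(size=(8, 16))  # 8 genre centroids, 16-dim embeddings
user = rng.normal(size=16)         # the user's taste vector
print(nudge_genre_list(user, genres))
```

Note the design point the abstract makes: neither nudge alone sufficed, but combining the list ordering with a representative default slider position did promote exploration.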
***
17.30: Atanu Sinha (Adobe Research)
Title: Personalized Detection of Cognitive Biases in Actions of Users from Their Logs
Abstract: Cognitive biases are mental shortcuts humans use when dealing with information and their environment, and they result in biased behaviors (or, actions), unbeknownst to the people themselves. Biases take many forms, with cognitive biases occupying a central role that affects fairness, accountability, transparency, ethics, law, medicine, and discrimination. Detecting biases is considered a necessary step toward mitigating them. Here, we focus on two cognitive biases: anchoring and recency. In computer science, cognitive bias has been recognized largely in the domain of information retrieval, where bias is identified at an aggregate level with the help of annotated data. Proposing a different direction for bias detection, we offer a principled, machine-learning-based approach to detect these two cognitive biases from web logs of users' actions. Detection at the individual user level makes our approach truly personalized, and it does not rely on annotated data. Instead, we start with two basic principles established in cognitive psychology, use modified training of an attention network, and interpret the attention weights in a novel way according to those principles to infer and distinguish between the two biases. The personalized approach allows detection for specific users who are susceptible to these biases when performing their tasks, and can help build awareness among them so that they can undertake bias mitigation.
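As a rough illustration of the attention-weight interpretation described above, the sketch below labels a user's action log by where an attention network concentrates its mass: on the earliest actions (read as anchoring) or on the most recent ones (read as recency). This is a hedged sketch, not the authors' model; the fixed window fraction, the threshold, and the function names are assumptions.

```python
import numpy as np

def classify_bias(attention_weights, window=0.2, threshold=0.5):
    """Label one user's session from per-step attention weights (summing to 1).

    Mass concentrated on the first actions is read as anchoring; mass on the
    last actions as recency. Window size and threshold are illustrative."""
    w = np.asarray(attention_weights, dtype=float)
    k = max(1, int(len(w) * window))  # how many steps count as "early"/"late"
    if w[:k].sum() > threshold:
        return "anchoring"            # early actions dominate the decision
    if w[-k:].sum() > threshold:
        return "recency"              # recent actions dominate the decision
    return "no clear bias"

# Toy usage: attention piled on the first steps of a 10-action log.
print(classify_bias([0.4, 0.25, 0.1, 0.05, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01]))
# -> "anchoring"
```

The per-user reading is what makes the approach personalized: each user's own log yields their own attention profile, with no aggregate annotation required.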
Just keep counting: SEA talks #242 and #243.
