CROWD SCIENCE SEMINAR #5

Description:

Human decision-making systems are the backbone of many applications: crowdsourcing, peer review, hiring, and employee performance evaluation. However, if not designed carefully, such systems may produce unreliable outcomes that degrade the overall quality of the process. This talk will present principled approaches to the design and evaluation of large-scale human decision-making systems, focusing on three important aspects:

  1. Noise. How do we compensate for the mistakes of individual agents involved in the system? Primary application: crowdsourcing. (See the sketch after this list.)
  2. Strategic behavior. How do we ensure that agents behave honestly and do not engage in strategic manipulations? Primary applications: peer grading, employee performance evaluation.
  3. Bias. How do we evaluate the impact of various biases (subtle cognitive biases, race/gender biases) on the decisions made? Primary application: peer review.
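
As an illustration of the first topic, the sketch below shows one classic way to compensate for annotator noise in crowdsourcing: start from majority voting, then iteratively re-weight each worker by their agreement with the current consensus, an EM-style heuristic in the spirit of Dawid and Skene (1979). This is not the speaker's method, just a minimal hypothetical example; the data and variable names are invented.

```python
from collections import defaultdict

# Hypothetical noisy annotations: labels[task] = list of (worker, answer) pairs.
labels = {
    "t1": [("w1", "cat"), ("w2", "cat"), ("w3", "dog")],
    "t2": [("w1", "dog"), ("w2", "cat"), ("w3", "dog")],
    "t3": [("w1", "cat"), ("w2", "cat"), ("w3", "cat")],
}

def weighted_vote(labels, weights):
    """Pick, per task, the answer with the largest total worker weight."""
    consensus = {}
    for task, votes in labels.items():
        scores = defaultdict(float)
        for worker, answer in votes:
            scores[answer] += weights.get(worker, 1.0)
        consensus[task] = max(scores, key=scores.get)
    return consensus

# Start with uniform weights (plain majority vote), then iterate.
weights = defaultdict(lambda: 1.0)
for _ in range(5):
    consensus = weighted_vote(labels, weights)
    # Re-estimate each worker's accuracy against the current consensus.
    hits, totals = defaultdict(int), defaultdict(int)
    for task, votes in labels.items():
        for worker, answer in votes:
            totals[worker] += 1
            hits[worker] += int(answer == consensus[task])
    weights = {w: hits[w] / totals[w] for w in totals}

print(consensus)  # {'t1': 'cat', 't2': 'dog', 't3': 'cat'}
```

Here the reliable worker (w1) ends up with a higher weight than the two workers who each disagree with the consensus once, so later votes by w1 count for more.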

Ivan Stelmakh is a PhD candidate in the Machine Learning Department at Carnegie Mellon University. His research interests lie in the broad area of Human-AI collaboration, and he has published work at venues including NeurIPS, AAAI, and CSCW. Ivan served as workflow chair of the ICML 2020 conference and is an editor of the ML@CMU blog.
