In higher education, many institutions use algorithmic alerts to flag at-risk students and deliver advising at scale. While much research has focused on the accuracy of algorithmic alert systems, less is known about how advisors translate risk predictions into effective interventions.
In this talk, Kara will present evidence that advisors' discretionary judgments in acting on predictions may be key to these tools' success, drawing on rich quantitative and qualitative data from a randomized controlled trial of an algorithm-assisted advising program at Georgia State University: the Monitoring Advising Analytics to Promote Success (MAAPS) experiment. She will present a framework combining causal modeling, statistical testing, and qualitative analysis to study one kind of discretionary judgment in MAAPS: advisors' incorporation of non-algorithmic context to "expertly" target interventions to students.
The results suggest that advisors incorporate diverse non-algorithmic information into their intervention decisions: an estimated two in three interventions in the MAAPS experiment may have been "expertly targeted" by advisors. The findings shed light on the role of advisor expertise in a real-world algorithm-assisted advising program and underscore the importance of accounting for human discretion in the design, evaluation, and implementation of algorithmic decision systems in education.
This talk is based on joint work with Sofiia Druchyna, Benjamin Brandon, Jenise Stafford, Hannah Li, and Lydia Liu.