Adaptive Learning in Simulations & Games: Aspects of the response scoring process


Details

Let's meet up and talk about how to use messy user-interaction data (log files or clickstreams) from a Natural Selection app to auto-score specific responses using a rules-based scorer and a machine learning classifier.

Abstract

The first step toward adaptive learning in simulations and games is being able to rationalize what the student has done, that is, to articulate the evidence rules as described by an evidence-centered design approach. The subsequent response scoring process gives us the raw data for input into statistical models for adaptive learning. As we move toward three-dimensional science assessment as defined by the Next Generation Science Standards, we need new automated methods for translating digital work products into scored observable variables.
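To make the idea of an evidence rule concrete, here is a minimal sketch of a rules-based scorer that turns raw log events into a scored observable variable. The event names, fields, and the rule itself are hypothetical illustrations, not the actual Amplify Science schema.

    # Minimal sketch of one evidence rule applied to clickstream events.
    # Event names and fields are hypothetical, for illustration only.
    def score_trait_selection(events):
        """Return 1 if the student ran the simulation after selecting a
        heritable trait, else 0 (a scored observable variable)."""
        selected_heritable_trait = False
        for event in events:  # each event is a dict parsed from the log file
            if event["action"] == "select_trait" and event.get("heritable"):
                selected_heritable_trait = True
            elif event["action"] == "run_simulation" and selected_heritable_trait:
                return 1
        return 0

    log = [
        {"action": "select_trait", "trait": "fur_color", "heritable": True},
        {"action": "run_simulation"},
    ]
    print(score_trait_selection(log))  # -> 1

A real scorer would apply many such rules per task, one per feature of the work product, but the pattern of mapping an event stream to a discrete score is the same.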

I will present aspects of the response scoring process we are building at Amplify Science, a brand-new, digitally enhanced core curriculum for K-8 science. Students use web-based scientific simulations to explore, experiment with, and model complex scientific phenomena. Automated scoring of responses to these complex performance tasks is non-trivial because any one response or process has many potential features to score. I will show how we use messy user-interaction data (log files or clickstreams) from a Natural Selection app to auto-score specific responses using a rules-based scorer and a machine learning classifier. I estimate task validity through a comparative analysis of content-expert ratings, evidence-rule scoring, and machine learning approaches. The machine learning approaches agree with expert human scoring as well as or better than human-written scoring rules do. We are continuing to develop our approach to automated scoring using additional validation and scoring methods.
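As a toy sketch of what the comparative analysis can look like, the snippet below trains a classifier on clickstream-derived features and measures its agreement with expert ratings using Cohen's kappa, a common statistic for rater agreement. The features, data, model, and agreement statistic here are assumptions for illustration (scikit-learn, synthetic data), not the talk's actual setup.

    # Illustrative sketch: classifier vs. expert-rating agreement.
    # Features and labels are synthetic stand-ins, not real study data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 200
    # e.g. number of simulation runs, traits selected, seconds on task
    X = rng.random((n, 3))
    expert = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in for human ratings

    X_train, X_test, y_train, y_test = train_test_split(X, expert, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Agreement between classifier scores and expert ratings on held-out tasks
    print("Cohen's kappa:", cohen_kappa_score(y_test, clf.predict(X_test)))

The same agreement statistic can be computed between the rules-based scores and the expert ratings, which is what allows the two automated approaches to be compared head to head.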

Bio

Samuel Crane is the analytics lead and a Project Principal on the K-8 Science curriculum product at Amplify. He holds a Ph.D. in Ecology, Evolutionary Biology, and Behavior from CUNY. His dissertation work in computational biology was conducted at the American Museum of Natural History and the Sackler Institute for Comparative Genomics. At Amplify, he is now developing novel analytical approaches to measuring student achievement on technology-enhanced assessments, curriculum apps, and digital simulations, using methods including item response theory, data mining, and machine learning.