Automated Scoring Performance on the NAEP Automated Scoring Challenge: Item‐Specific Models
The Problem
Cambium Assessment, Inc. (CAI) participated in the National Assessment of Educational Progress (NAEP) Automated Scoring (AS) Challenge, in which NAEP provided items, scoring materials, and data to participating organizations to examine the state of the art in modeling on these items. As noted in the provided materials, “The purpose of the challenge is to help NAEP determine the existing capabilities, accuracy metrics, the underlying validity evidence of assigned scores, and costs and efficiencies of using automated scoring with the NAEP reading assessment items.” There were two challenges: one for item‐specific models (20 items) and one for generic models (2 items).
Our Solution
This report addresses the technical portion of the competition for the item‐specific models. It describes the methods used to train transformer-based deep learning models and the approach to validating them, which examined overall and subgroup performance, performance on rater qualification papers, correlation with response length, and correlation with argumentation quality.
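The validation checks listed above can be illustrated with a short sketch. The snippet below is illustrative only and is not CAI's actual pipeline: the sample data, subgroup labels, and choice of metrics (quadratic weighted kappa for human–machine agreement, Pearson correlation for score–length association, exact agreement by subgroup) are assumptions, not details taken from the report.

```python
from collections import defaultdict

def qwk(y_true, y_pred, n_classes):
    """Quadratic weighted kappa between two integer score vectors in 0..n_classes-1."""
    # Observed confusion matrix
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    n = len(y_true)
    hist_t = [sum(row) for row in obs]                                  # human score counts
    hist_p = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]  # model counts
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2                     # quadratic disagreement weight
            num += w * obs[i][j]                                        # observed weighted disagreement
            den += w * hist_t[i] * hist_p[j] / n                        # expected under independence
    return 1.0 - num / den if den else 1.0

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical scored responses: (human score, model score, subgroup label, word count)
rows = [
    (2, 2, "A", 120), (1, 1, "A", 60), (0, 1, "B", 30),
    (3, 3, "B", 200), (2, 1, "A", 90), (3, 3, "B", 180),
]
human = [r[0] for r in rows]
model = [r[1] for r in rows]

print("overall QWK:", round(qwk(human, model, 4), 3))
print("score-length correlation:", round(pearson(model, [r[3] for r in rows]), 3))

# Agreement by subgroup, to flag potential differential performance
groups = defaultdict(list)
for r in rows:
    groups[r[2]].append(r)
for g, rs in sorted(groups.items()):
    exact = sum(r[0] == r[1] for r in rs) / len(rs)
    print(f"subgroup {g}: exact agreement {exact:.2f}")
```

In practice the same metrics would be computed on held-out human-scored responses for each of the 20 items, with a length correlation close to the human raters' own as one piece of validity evidence.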
Download The White Paper