Communicating With The Public About Machine Scoring

08/11/2020 | White Paper

[Figure: stakeholders in a K–12 assessment program]
The Problem

With online testing, there is increasing demand to return scores quickly and at lower cost, along with growing evidence that automated scoring systems can score student responses accurately. As a result, many assessment programs are adopting, or considering adopting, automated scoring as a replacement for or complement to human scoring.

In this white paper, co-authored with Mark Shermis, formerly dean and professor at the University of Houston–Clear Lake, we present case studies on how to communicate the transition to automated scoring to various stakeholders. The case studies highlight four U.S. states (West Virginia, Louisiana, Utah, and Ohio), one Canadian province (Alberta), and one country (Australia). Based upon the experience of these jurisdictions, we present recommendations to clients who are considering adopting automated scoring.

Our Solution

The issue of adopting automated scoring is, at its core, one of communication. As the case studies highlight, many people rightfully care about the scores produced in their assessment program. The figure above presents the different stakeholders in a K–12 assessment program. Each stakeholder wants three things from the scoring engine: fairness, reliability, and validity of scores. In addition, a program that seeks to adopt automated scoring will benefit when it can clearly answer stakeholders' questions about these qualities. Historically, such responses have been identified by human raters during the hand-scoring process; because hand-scoring can take weeks, the identification and routing of student responses back to schools has not occurred quickly.
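To make "reliability" concrete, the short sketch below (ours, not taken from the white paper) computes quadratic weighted kappa, a statistic commonly used to quantify agreement between human and machine scores on the same responses. The score values shown are hypothetical and purely illustrative.

```python
# Minimal sketch: checking human-machine score agreement with quadratic
# weighted kappa. The paired scores below are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

human_scores   = [2, 3, 1, 4, 2, 3, 0, 4, 2, 1]  # human ratings (0-4 rubric)
machine_scores = [2, 3, 2, 4, 2, 3, 1, 4, 2, 1]  # automated engine scores

# weights="quadratic" penalizes larger disagreements more heavily,
# which is the convention in most automated-scoring evaluations.
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"Quadratic weighted kappa (human vs. machine): {qwk:.2f}")
```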

Download The White Paper



Topics:
  • Machine Learning
  • Psychometrics
  • State Assessment
  • Technology