Communicating With The Public About Machine Scoring: What Works, What Doesn’t
The Problem
With online testing, there is increasing demand to return scores quickly and at lower cost, along with growing evidence that automated scoring systems can score papers accurately. As a result, many assessment programs are adopting, or considering adopting, automated scoring as a replacement for or complement to human scoring.
In this white paper, co-authored with Mark Shermis, formerly dean and professor at the University of Houston-Clear Lake, we present case studies on how to communicate the transition to automated scoring to various stakeholders. Our case studies highlight four U.S. states (West Virginia, Louisiana, Utah, and Ohio), one Canadian province (Alberta), and one country (Australia). Based on the experiences of these jurisdictions, we offer recommendations to clients who are considering adopting automated scoring.