Using AI to Identify At-Risk Student Responses

10/09/2019 | White Paper

The Problem

In large-scale assessments, student responses indicating immediate or potential harm are identified and routed to our clients for possible intervention, to help ensure the safety of the student or others. Fortunately, these responses occur very rarely. This rarity, however, does not diminish the importance of identifying them.

Historically, these responses have been identified by human raters during the hand-scoring process. Because hand-scoring can take weeks, flagged responses have often been slow to reach schools.

Our Solution

Cambium Assessment’s Hotline alert system automates this process by reviewing every response and flagging any that are potentially troubling. Flagged responses are sent for professional review, and any response deemed to be at risk is then routed to our clients. This automation helps ensure that responses reach clients quickly, often within 24–48 hours, so state or school personnel can intervene promptly.
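At a high level, that flow resembles the minimal sketch below. This is an illustration only, not Cambium's implementation: the Response class, the in-memory review queue, and the placeholder is_potentially_troubling() check are all assumptions made for the example.

    # Illustrative sketch of the review-and-route flow; not Hotline's
    # actual implementation. All names here are assumptions.
    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Response:
        response_id: str
        text: str

    review_queue: Queue = Queue()

    def is_potentially_troubling(response: Response) -> bool:
        # Placeholder heuristic standing in for the trained model's
        # prediction; the real system uses a machine-learned classifier.
        return "hurt myself" in response.text.lower()

    def review_and_route(response: Response) -> None:
        # Every response is reviewed automatically; only flagged
        # responses go on to professional human review, after which
        # confirmed at-risk responses are routed to the client.
        if is_potentially_troubling(response):
            review_queue.put(response)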

Machine learning systems such as Hotline mimic hand-scoring. To learn how to identify an at-risk student response, we train Hotline on human-verified responses. In this white paper, we use qualitative methods to further refine what constitutes an at-risk response and what constitutes a concerning (but perhaps not at-risk) response. We then examine how well raters can distinguish among at-risk responses, concerning responses, and all other responses. Finally, we examine the extent to which machine learning methods can mimic human scoring.
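To make that training setup concrete, here is a minimal sketch of one common approach: TF-IDF text features feeding a logistic-regression classifier over the three response categories, evaluated against held-out human labels. It is an assumption-laden illustration, not a description of Hotline's actual features or model; the load_labeled_responses() loader and the label names are hypothetical.

    # Illustrative baseline only -- not Hotline's actual model.
    # Assumes responses have been human-labeled into three classes:
    # "at_risk", "concerning", and "other".
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    texts, labels = load_labeled_responses()  # hypothetical data loader

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=0
    )

    # Bag-of-words TF-IDF features feeding a logistic-regression
    # classifier over the three categories.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)

    # Agreement between model predictions and held-out human labels
    # is one way to quantify how well the machine mimics human scoring.
    print(classification_report(y_test, model.predict(X_test)))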

Key findings from the study include:
  • A refined characterization of what constitutes an at-risk response
  • Validation that professionally trained and monitored raters can accurately identify alerts
  • Evidence that machine learning can mimic this human scoring
