A Framework for Considering Device and Interface Features That May Affect Student Performance on the National Assessment of Educational Progress

Denny Way, College Board
Ellen Strain-Seymour, Pearson

An ongoing high priority for the National Assessment of Educational Progress (NAEP) is the ability to compare performance results from one year to another. A recent challenge to maintaining NAEP trends has arisen with the exploration of new testing methods and question types that reflect the growing use of technology in education. NAEP has introduced a variety of new question and task types in its assessments, including writing-on-computer tasks, interactive computer tasks, hybrid hands-on tasks, and scenario-based tasks from the Technology and Engineering Literacy (TEL) and reading assessments.

The NAEP Validity Studies (NVS) Panel has outlined potential studies that might be added to its research agenda to explore device effects and the impact of device familiarity on performance, including randomized trials of alternate devices, teacher surveys, expert panels, and cognitive labs.

In addition, consideration is being given to how to maintain trend in the face of constantly changing technology. Options range from conducting a bridge study each time the delivery device or administration interface changes to reconceptualizing what “standardization” and “trend” mean. However, these potential studies and policy actions are difficult to prioritize because there is no organizing framework within which to evaluate them. What is needed is an elucidation of the significant causal variables that should guide such studies.

The purpose of this white paper is to provide a framework for considering device and interface features that may affect student performance on digital NAEP assessments, and to prioritize the variables that should be examined in further validity studies.