Can Subject Matter Experts’ Ratings of Statement Extremity Be Used to Streamline the Development of Unidimensional Pairwise Preference Scales?

Keywords

computer simulation procedures, Monte Carlo, bootstrapping, quantitative research, item response theory, measurement models

Abstract

Interest in on-demand noncognitive assessment has flourished due to advances in computer technology and studies demonstrating noteworthy predictive validities for organizational outcomes. Computerized adaptive testing (CAT) based on the Zinnes-Griggs (ZG) ideal point item response theory (IRT) model may hold promise for organizational settings, because a large pool of items can be created from a modest number of stimuli, and the items have been shown to be resistant to some types of rater bias. However, the sample sizes needed for marginal maximum likelihood (MML) estimation of statement parameters are quite large and could thus limit its usefulness in practice. This article addresses that concern and its ramifications for CAT. Specifically, we conducted empirical and simulation studies to examine whether subject matter expert (SME) ratings of statement extremity (location) can be substituted for MML estimates to streamline test development and launch. Results showed that error in SME-based location estimates had little detrimental effect on score accuracy or validity, regardless of whether measures were constructed adaptively or nonadaptively. Implications for research involving small samples and CAT in field settings are discussed.

Citation / Publisher Attribution

Organizational Research Methods, vol. 14, no. 2, pp. 256-278