An Examination of the Comparative Reliability, Validity, and Accuracy of Performance Ratings Made Using Computerized Adaptive Rating Scales
Document Type
Article
Publication Date
2001
Keywords
iterative paired comparison, computerized adaptive rating scales, job performance rating, method comparisons, error of measurement, validity, reliability
Digital Object Identifier (DOI)
https://doi.org/10.1037/0021-9010.86.5.965
Abstract
This laboratory research compared the reliability, validity, and accuracy of a computerized adaptive rating scale (CARS) format and 2 relatively common and representative rating formats. The CARS is a paired-comparison rating task that uses adaptive testing principles to present pairs of scaled behavioral statements to the rater and thereby iteratively estimate a ratee's effectiveness on 3 dimensions of contextual performance. Videotaped vignettes of 6 office workers were prepared, depicting prescripted levels of contextual performance, and 112 subjects rated these vignettes using the CARS format and one of the two competing formats. Results showed 23%–37% lower standard errors of measurement for the CARS format. In addition, validity was significantly higher for the CARS format (d = .18), and Cronbach's accuracy coefficients showed significantly higher accuracy for the CARS format, with a median effect size of .08. The discussion focuses on possible reasons for these results.
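The adaptive paired-comparison idea described above (present pairs of scaled behavioral statements and iteratively home in on a ratee's effectiveness) can be illustrated with a minimal sketch. The statement scale values, the logistic choice model, the bracket-the-estimate pair-selection rule, and the posterior-mean update below are all assumptions chosen for illustration; they are not the scaling or estimation procedure reported in the article.

```python
import numpy as np

# Illustrative scaled effectiveness values for behavioral statements on one
# dimension (hypothetical numbers, not the article's scaling).
statements = np.array([1.0, 2.0, 3.5, 4.5, 5.5, 6.5])

def choice_prob(theta, low, high):
    """Probability the rater endorses the higher-scaled statement of a pair,
    given the ratee's effectiveness theta (simple logistic choice model)."""
    midpoint = (low + high) / 2.0
    return 1.0 / (1.0 + np.exp(-(theta - midpoint)))

def rate_adaptively(respond, n_items=5, grid=np.linspace(1, 7, 121)):
    """Iteratively present statement pairs that bracket the current estimate,
    update a posterior over effectiveness, and return the posterior mean."""
    posterior = np.ones_like(grid) / grid.size          # uniform prior
    for _ in range(n_items):
        estimate = np.sum(grid * posterior)             # current posterior mean
        low = statements[statements < estimate].max(initial=statements.min())
        high = statements[statements > estimate].min(initial=statements.max())
        endorsed_high = respond(low, high)              # rater picks a statement
        p = choice_prob(grid, low, high)
        posterior *= p if endorsed_high else (1.0 - p)  # Bayes update
        posterior /= posterior.sum()
    return np.sum(grid * posterior)

# Example: simulate a rater whose choices follow a true effectiveness of 5.2.
rng = np.random.default_rng(0)
simulated_rater = lambda lo, hi: rng.random() < choice_prob(5.2, lo, hi)
print(round(rate_adaptively(simulated_rater), 2))
```

The bracket-the-estimate selection rule mirrors the general adaptive-testing principle of presenting items near the current estimate so that each successive response is maximally informative about the ratee's standing.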
Was this content written or created while at USF?
No
Citation / Publisher Attribution
Journal of Applied Psychology, v. 86, issue 5, p. 965–973
Scholar Commons Citation
Borman, Walter C.; Buck, Daren E.; Hanson, Mary Ann; Motowidlo, Stephan J.; Stark, Stephen; and Drasgow, Fritz, "An Examination of the Comparative Reliability, Validity, and Accuracy of Performance Ratings Made Using Computerized Adaptive Rating Scales" (2001). Psychology Faculty Publications. 1072.
https://digitalcommons.usf.edu/psy_facpub/1072