Graduation Year

2021

Document Type

Thesis

Degree

M.A.

Degree Name

Master of Arts (M.A.)

Degree Granting Department

Psychology

Major Professor

Stephen Stark, Ph.D.

Committee Member

Seang-Hwane Joo, Ph.D.

Committee Member

Brenton M. Wiernik, Ph.D.

Committee Member

Marina A. Bornovalova, Ph.D.

Keywords

Markov chain Monte Carlo (MCMC), Item Response Theory (IRT), Generalized Graded Unfolding Model (GGUM), Forced Choice

Abstract

Multidimensional forced choice (MFC) testing has been proposed as a way of reducing response biases in noncognitive measurement. Although early item response theory (IRT) research focused on illustrating that trait scores with normative properties could be obtained using various MFC models and formats, more recent attention has been devoted to exploring the processes involved in test construction and how those processes influence MFC scores. This research compared two approaches for estimating Multi-Unidimensional Pairwise Preference model (MUPP; Stark et al., 2005) parameters based on the Generalized Graded Unfolding Model (GGUM; Roberts et al., 2000). More specifically, we compared the efficacy of statement and person parameter estimation based on the “two-step” process developed by Stark et al. (2005) with a more recently developed “direct” estimation approach (Lee et al., 2019) in a Monte Carlo study that also manipulated test length, test dimensionality, sample size, and the correlations between the generating thetas for each dimension. Results indicated that the two approaches had similar scoring accuracy, although the two-step approach recovered statement parameters better than the direct approach. Implications, limitations, and recommendations for future MFC research and practice are discussed.
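For readers unfamiliar with the models named in the abstract, a brief sketch may help (the notation below is assumed for illustration; see the cited sources for full definitions). Under a dichotomous GGUM (Roberts et al., 2000), the probability that a respondent with trait level \(\theta\) agrees with statement \(s\), which has discrimination \(\alpha_s\), location \(\delta_s\), and threshold \(\tau_s\), is

\[
P_s(1 \mid \theta) = \frac{\exp\{\alpha_s[(\theta - \delta_s) - \tau_s]\} + \exp\{\alpha_s[2(\theta - \delta_s) - \tau_s]\}}{1 + \exp\{3\alpha_s(\theta - \delta_s)\} + \exp\{\alpha_s[(\theta - \delta_s) - \tau_s]\} + \exp\{\alpha_s[2(\theta - \delta_s) - \tau_s]\}}.
\]

The MUPP (Stark et al., 2005) then models the probability of preferring statement \(s\) over statement \(t\) in a pairwise item, where the two statements may measure different dimensions \(d_s\) and \(d_t\), as

\[
P(s \succ t \mid \theta_{d_s}, \theta_{d_t}) = \frac{P_s(1 \mid \theta_{d_s})\, P_t(0 \mid \theta_{d_t})}{P_s(1 \mid \theta_{d_s})\, P_t(0 \mid \theta_{d_t}) + P_s(0 \mid \theta_{d_s})\, P_t(1 \mid \theta_{d_t})},
\]

with \(P(0 \mid \theta) = 1 - P(1 \mid \theta)\). In broad terms, the “two-step” approach estimates the GGUM statement parameters first and then scores MFC responses with those parameters fixed, whereas the “direct” approach estimates statement and person parameters simultaneously from the MFC responses (Lee et al., 2019).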

Included in

Psychology Commons
