Major Professor

Michael T. Brannick, Ph.D.

Co-Major Professor

Walter C. Borman, Ph.D.

Committee Member

Judith Becker Bryant, Ph.D.

Committee Member

Bill N. Kinder, Ph.D.

Committee Member

Stephen Stark, Ph.D.


convergent validity, criterion-related validity, job performance, meta-analysis, nomological network, personality tests, reliability


In recent years, meta-analytic reviews have estimated validities for the use of personality scales in the prediction of job performance from an array of empirical studies. A variety of personality measures were used in the original studies, and procedures and decisions concerning the categorization of these measures into Big Five personality factors have differed among reviewers. An underlying assumption of meta-analysis is that the predictors across included studies are essentially the same, as is the criterion. If this is not the case, then problems arise for both theory and practical application. If predictors that are not highly correlated are combined in a meta-analysis, then the theoretical understanding of antecedents and consequents of the predictors will be clouded. Further, combining predictors that are not essentially the same may obscure different relations between predictors and criteria; that is, the test may operate as a moderator.

To meet the assumption of similarity, systematic methods of categorizing personality scales are advised. Two indicators of scale commensurability are proposed: 1) high correlations among predictor scales and 2) similar patterns of correlations between predictor scales and job-related criteria. In the current study, the similarity of the most commonly used personality scales in organizational contexts was assessed based on these two indicators. First, meta-analyses of correlations between scales were conducted. Second, subgroup meta-analyses of criterion-related validity were examined, with specific personality scale and criterion as moderators.
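The first indicator above rests on meta-analytically combining correlations between scales across studies. A minimal sketch of the bare-bones (Hunter-Schmidt style) approach follows; the study correlations and sample sizes are invented for illustration, and the function name is hypothetical.

```python
# Bare-bones meta-analysis sketch: sample-size-weighted mean correlation and
# residual (true) variance after removing expected sampling-error variance.
# All study data below are illustrative, not taken from the dissertation.

def bare_bones_meta(correlations, sample_sizes):
    """Return the weighted mean correlation and residual variance."""
    total_n = sum(sample_sizes)
    # Sample-size-weighted mean correlation across studies
    r_bar = sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n
    # Weighted observed variance of the study correlations
    var_obs = sum(
        n * (r - r_bar) ** 2 for r, n in zip(correlations, sample_sizes)
    ) / total_n
    # Expected sampling-error variance for the average study size
    avg_n = total_n / len(correlations)
    var_err = (1 - r_bar ** 2) ** 2 / (avg_n - 1)
    # Residual variance attributable to real differences between studies
    var_res = max(var_obs - var_err, 0.0)
    return r_bar, var_res

# Example: correlations between two scales of the "same" Big Five factor
rs = [0.45, 0.52, 0.38, 0.60]
ns = [120, 85, 200, 150]
mean_r, true_var = bare_bones_meta(rs, ns)
```

A mean correlation that is only moderate, or a residual variance well above zero, would both cut against the assumption that the scales are interchangeable.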

Correlations between criterion-related validity and certain sample characteristics were also computed to determine whether sample characteristics act as moderators of validity. Additionally, an examination of personality scale reliabilities was conducted.
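A moderator check of this kind amounts to correlating study-level validity coefficients with a study-level sample characteristic. The sketch below uses invented values (the characteristic chosen, mean incumbent age, is an assumption for illustration only).

```python
# Hypothetical sketch: correlate study validity coefficients with a sample
# characteristic to probe for moderation. Data are illustrative only.
import statistics


def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


validities = [0.21, 0.15, 0.30, 0.12, 0.25]  # per-study validity coefficients
mean_ages = [34.0, 41.5, 29.0, 44.0, 31.0]   # per-study sample characteristic
moderator_r = pearson_r(validities, mean_ages)
```

A nonzero correlation here would suggest the sample characteristic moderates validity; in practice such study-level correlations would be weighted and tested formally rather than read off directly.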

Results reveal that the assumption of similarity among personality measures may not be entirely met. Whereas meta-analyzed reliability and criterion-related validity coefficients seldom differed greatly, scales of the "same" construct were only moderately correlated in many cases. Although these results suggest that previous meta-analytic results concerning reliability and criterion-related validity are generalizable across tests, questions remain about the similarity of personality construct conceptualization and operationalization. Further research into comprehensive measurement of the predictor space is suggested.