A Comparison of Alternate Approaches to Creating Indices of Academic Rigor

Document Type

Task Force Report

Publication Date


Keywords

Graduation Rate, High Schools, Academic Standards, College Preparation, College Graduates, Educational Quality, Difficulty Level, Scoring, Multiple Regression Analysis, College Entrance Examinations, Grade Point Average, Academic Persistence, Scores, Models


In recent decades, increasing emphasis has been placed on raising college graduation rates and reducing attrition because of the social and economic benefits, at both the individual and national levels, proposed to accrue from a more highly educated population (Bureau of Labor Statistics, 2011). In the United States in particular, there is concern that declining college graduation rates relative to those of other nations will reduce economic competitiveness (Callan, 2008). As such, in addition to research on how to increase educational performance in elementary and secondary schools, educational researchers are also interested in the determinants of performance and persistence at the collegiate level.

One method hypothesized to promote increased college graduation rates is to raise the standards in the nation’s high schools in order to better prepare students for college. Indeed, data from many converging sources suggest that high school graduates are not prepared for higher-level college curricula (Achieve, 2005). Accordingly, many state institutions have attempted to set standards for rigor in order to ensure students are prepared for college study. Against this backdrop, the College Board has recently developed a measure of academic rigor, termed the Academic Rigor Index (ARI), for the purpose of examining how well a student is prepared for college study both within and across broad content domains (Wiley, Wyatt, & Camara, 2010). The ARI awards 0 to 5 points in each of five areas (English, mathematics, science, social science/history, and foreign/classical languages) based on students’ self-reported course-taking and sums these to create an overall index on a 0–25-point scale. The 25 credited activities are drawn from a larger set of course-taking variables, and each credited activity’s inclusion in the index is empirically supported by links to subsequent collegiate performance. The decisions to award an equal number of possible points in each of the five areas, and to weight each area equally in computing the total score, were not empirically based; thus, the degree to which relaxing the equal-points-per-area and equal-weight-per-area constraints could improve the predictive power of the ARI is not known. The purpose of the present paper is to compare the ARI with alternative scoring procedures that remove these constraints.
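The contrast between the ARI's equal-weight scoring and an unconstrained alternative can be sketched in code. This is an illustrative assumption, not the College Board's implementation: the area names, example point values, and weights below are hypothetical, and in the actual study any unequal weights would be estimated empirically (e.g., by regressing a collegiate outcome such as first-year GPA on the area scores).

```python
# Hypothetical sketch: equal-weight ARI total versus a regression-weighted
# alternative index. All field names and numeric values are illustrative.
AREAS = ["english", "math", "science", "social_science", "language"]

def ari_total(area_points):
    """Equal-weight ARI: sum of 0-5 points in each of five areas (0-25 scale)."""
    return sum(area_points[a] for a in AREAS)

def weighted_index(area_points, weights):
    """Alternative index: area scores combined with unequal (e.g., regression-estimated) weights."""
    return sum(weights[a] * area_points[a] for a in AREAS)

# A hypothetical student's area scores (each 0-5)
student = {"english": 4, "math": 5, "science": 3, "social_science": 4, "language": 2}
print(ari_total(student))  # 18 on the 0-25 scale

# Hypothetical weights, as might be estimated from a multiple regression
w = {"english": 1.2, "math": 1.5, "science": 0.9, "social_science": 0.8, "language": 0.6}
print(round(weighted_index(student, w), 1))  # 19.4
```

Relaxing the equal-weight constraint in this way adds free parameters, so any gain in predictive power would need to be weighed against the risk of overfitting and the loss of the ARI's simple, interpretable 0–25 scale.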

Citation / Publisher Attribution

A Comparison of Alternate Approaches to Creating Indices of Academic Rigor, College Board, 25 p.