Graduation Year

2022

Document Type

Ed. Specialist

Degree

Ed.S.

Degree Name

Education Specialist (Ed.S.)

Degree Granting Department

Curriculum and Instruction

Major Professor

Nathaniel von der Embse, Ph.D.

Committee Member

David Putwain, Ph.D.

Committee Member

Eunsook Kim, Ph.D.

Keywords

classification, cut scores, usability, validation

Abstract

Standardized testing is an integral part of the English and American education systems. These tests are used to evaluate students, teachers, and schools. However, this evaluation has unintended consequences, one of which is test anxiety. Over the last 50 years, studies of test anxiety have increased alongside the widespread use of standardized tests (Hembree, 1988; von der Embse et al., 2019). Two areas, however, have received little attention: the measurement invariance of test anxiety scales across demographic groups, and the creation of classification standards for these scales to increase their usability. Examining measurement invariance is necessary to determine whether assessments measure the same constructs across groups. The lack of research in this area makes it unclear whether groups truly differ in the severity of test anxiety or whether the measurement tools themselves are flawed. Additionally, many test anxiety instruments are created for research rather than practice and lack evidence for classification standards or cut scores. This study seeks to address these limitations by examining the Multidimensional Test Anxiety Scale (MTAS) for measurement invariance across gender, age, and socio-economic status, and by examining the differences between cluster groups identified through a Latent Profile Analysis. The data used in this study were collected in the 2019–2020 school year from 918 students attending secondary school in England. The analyses to be completed are a Confirmatory Factor Analysis to determine measurement invariance and a Latent Profile Analysis to determine classifications.
