Graduation Year

2020

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Curriculum and Instruction

Major Professor

Robert Dedrick, Ph.D.

Committee Member

Yi-Hsin Chen, Ph.D.

Committee Member

John Ferron, Ph.D.

Committee Member

Sarah van Ingen Lauer, Ph.D.

Keywords

accountability, confirmatory factor analysis, content validity index, data literacy, data-based decision making

Abstract

Student achievement data have been the cornerstone of state and national accountability efforts for decades, and the focus among policymakers, the public, and researchers on data-based decision making and evidence-based practices in education continues to increase. Underlying the various pressures and incentives on educators to use data is a basic logic model: if teachers use data, their practice will change, and those changes will lead to improved student outcomes. The simplicity of the model belies the complexity of data use in practice, in which many individual- and organizational-level factors, such as attitudes and beliefs, competence, supports, and context, play a role. Both in research and in practice, there is a need for validated measures that can consistently estimate teachers' data use. The Teacher Data Use Survey (TDUS), a customizable self-report instrument developed by Wayman, Wilkerson, Cho, Mandinach, and Supovitz in 2016, aims to measure the five components of the conceptual framework of teacher data use that undergirds the survey: actions with data, organizational supports, attitudes toward data, competence in using data, and collaboration. While the results of the developers' pilot study offered preliminary evidence of the soundness of the measure, the validity evidence provided was limited in scope. The purpose of this study was to build on the pilot study by addressing three specific sources of validity evidence: content, internal structure, and relationships with three conceptually related constructs, namely teachers' educational level, the Data-Driven Decision-Making Efficacy and Anxiety Inventory (3D-MEA), and schools' accountability context. Data for the study consisted of item ratings from a six-member expert panel review and TDUS responses from 331 teachers who instruct elementary students in a large, diverse Florida public school district. Content Validity Index (CVI) values for the scales ranged from .50 to .92. Internal consistency reliability was high for each of the scales, with Cronbach's alphas ranging from .87 to .95. The fit of the confirmatory factor analysis models for the various subscales examined was generally acceptable, with some conceptual ambiguity noted for three items. As hypothesized, correlations between the TDUS Actions with Data scales and schools' accountability context were positive and strong (r = .70, p = .01 and r = .89, p < .01). Correlations between TDUS Data Competence scale scores and scores on the 3D-ME (efficacy) items were also as expected (r = .83, p < .01), as was the correlation between Data Competence scale scores and scores on the 3D-MA (anxiety) scale (r = -.50, p < .01). While the correlations between the TDUS Data Competence scale scores and teachers' educational level and individual 3D-MA items were in the expected direction, they were weak and non-significant. Potential uses and limitations of the TDUS, as well as the study's delimitations and limitations, are discussed.
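As a point of reference for the indices reported above, the sketch below shows one common way a scale-level Content Validity Index (S-CVI/Ave) and Cronbach's alpha are computed. It is an illustrative sketch only: the expert ratings and teacher responses in it are hypothetical and are not taken from the study, and the dissertation's own computational details may differ.

# Illustrative sketch (hypothetical data, not from the dissertation):
# scale-level CVI and Cronbach's alpha as commonly defined in the
# psychometric literature.

import numpy as np

def scale_cvi(ratings):
    """S-CVI/Ave: for each item, the proportion of experts rating it
    relevant (3 or 4 on a 4-point relevance scale), averaged across items."""
    ratings = np.asarray(ratings)            # shape: (n_experts, n_items)
    item_cvi = (ratings >= 3).mean(axis=0)   # I-CVI for each item
    return item_cvi.mean()

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    responses = np.asarray(responses, dtype=float)   # shape: (n_respondents, k_items)
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical relevance ratings from 6 experts for a 4-item scale (1-4 scale)
ratings = [[4, 3, 4, 2],
           [4, 4, 3, 3],
           [3, 4, 4, 2],
           [4, 3, 4, 3],
           [4, 4, 3, 4],
           [3, 4, 4, 3]]
print(round(scale_cvi(ratings), 2))

# Hypothetical Likert-type responses (5 teachers x 4 items)
responses = [[4, 4, 3, 4],
             [2, 2, 3, 2],
             [5, 4, 4, 5],
             [3, 3, 3, 3],
             [4, 5, 4, 4]]
print(round(cronbach_alpha(responses), 2))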
