Graduation Year
2013
Document Type
Dissertation
Degree
Ph.D.
Degree Granting Department
Educational Measurement and Research
Major Professor
Robert F. Dedrick
Keywords
Educational Accountability, Structural Equation Modeling, Teaching Effectiveness, Teaching Quality, Validity, Value-Added Scores
Abstract
Scores from value-added models (VAMs), as used for educational accountability, represent the educational effect teachers have on their students. The use of these scores in teacher evaluations for high-stakes decision making is new for the State of Florida, and validity evidence that supports or questions their use is critically needed. This research, using data from 2,385 teachers in 104 schools in one Florida school district, examined the validity of the value-added scores by correlating them with scores from an observational rubric used in the teacher evaluation process. The VAM scores were also examined in relation to several variables that the literature had identified as correlates of quality teaching, as well as variables that were theoretically independent of teacher performance.
The observational rubric used in the validation process was based on Marzano's and Danielson's frameworks and consisted of 34 items and five factors (Ability to Assess Instructional Needs, Plans and Delivers Instruction, Maintains a Student-Centered Learning Environment, Performs Professional Responsibilities, Engages in Continuous Improvement for Self and School). Confirmatory factor analysis of the rubric's psychometric properties supported the fit of the five-factor structure underlying the rubric. Internal consistency reliabilities for the five observational scales and the total score ranged from .81 to .96.
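To make these psychometric analyses concrete, the sketch below shows how the scale reliabilities and the five-factor CFA could be computed in Python. The item groupings, column names, and file name are hypothetical, and the CFA portion assumes the semopy package; the dissertation does not specify the software actually used.

```python
import pandas as pd
import semopy  # SEM package assumed available; uses lavaan-style model syntax

# Hypothetical layout: one row per teacher, rubric items named item1..item34,
# grouped into the five scales described in the abstract.
df = pd.read_csv("rubric_items.csv")  # hypothetical file of item-level ratings

scales = {
    "AssessNeeds":   ["item1", "item2", "item3"],          # illustrative groupings only
    "PlansDelivers": ["item4", "item5", "item6", "item7"],
    # remaining three scales omitted for brevity
}

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Alpha per scale (the total-score alpha would pool all 34 items)
alphas = {name: cronbach_alpha(df[cols]) for name, cols in scales.items()}
print(alphas)

# Confirmatory factor analysis of the five-factor structure (two factors shown)
cfa_desc = """
AssessNeeds   =~ item1 + item2 + item3
PlansDelivers =~ item4 + item5 + item6 + item7
"""
cfa = semopy.Model(cfa_desc)
cfa.fit(df)
print(semopy.calc_stats(cfa))  # fit indices such as CFI and RMSEA
```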
The relationships between the observational rubric scores and the VAM scores, with and without the standard error of measurement (SE) applied to the VAM score, were generally weak for the overall sample (correlations ranged from .05 to .09 between the five observational scales and VAM with SE, and from .14 to .18 between the five observational scales and VAM without SE). Inspection of the relationship between the VAM and total observational scores within each of the 104 schools showed that, while some schools had a strong relationship, most schools showed little to no relationship between the two measures intended to represent a quality/effective teacher.
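As an illustration of these correlational analyses, the following sketch computes the overall and within-school Pearson correlations between the VAM and rubric scores. The DataFrame, column names, and file name are illustrative assumptions, not the study's actual data layout.

```python
import pandas as pd

# Hypothetical input: one row per teacher with columns 'school_id', 'vam',
# 'vam_se' (VAM with the SE applied), and 'obs_total' (rubric total score).
df = pd.read_csv("teacher_scores.csv")

# Overall Pearson correlations between the two VAM variants and the rubric total
print(df[["vam", "vam_se", "obs_total"]].corr(method="pearson"))

# One VAM-by-rubric correlation per school (104 schools in the study),
# to show how the relationship varies across schools
within_school = df.groupby("school_id").apply(lambda g: g["vam"].corr(g["obs_total"]))
print(within_school.describe())
```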
The last part of this research investigated the relationship of the VAM scores and the observational rubric scores with variables that had been identified in the literature as correlates of quality teaching; relationships with variables that the literature had shown to be independent of quality teaching were also examined. Results indicated that VAM scores were not significantly related to any of the predictor variables (e.g., National Board Certification, years of experience, gender). The observational rubric, on the other hand, showed significant relationships with National Board Certification, years of experience, and gender.
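A minimal sketch of how such predictor analyses might be run follows; the regression specification, column names, and file name are assumptions for illustration rather than the study's actual models.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: 'nbc' (National Board Certified, 0/1), 'years_exp',
# and 'gender', alongside 'vam' and 'obs_total' from the earlier sketches.
df = pd.read_csv("teacher_scores.csv")

# Regress each quality measure on the correlates identified in the literature
vam_fit = smf.ols("vam ~ nbc + years_exp + C(gender)", data=df).fit()
obs_fit = smf.ols("obs_total ~ nbc + years_exp + C(gender)", data=df).fit()

print(vam_fit.summary())  # the study reported no significant predictors of VAM scores
print(obs_fit.summary())  # NBC, experience, and gender were related to rubric scores
```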
The validity evidence provided in this research calls for caution when using VAM scores in teacher evaluations for high-stakes decision making. The weak relationships between teachers' observational performance scores and their value-added scores suggest that these measures represent different dimensions of the multidimensional construct of teaching quality. Ongoing research is needed to better understand the strengths and limitations of both the observational and VAM measures and the reasons these measures often fail to converge. In addition, teacher factors (e.g., grade level) that can account for variation in both the VAM and observational scores need to be identified.
Scholar Commons Citation
Güerere, Claudia, "Value-Added and Observational Measures Used in the Teacher Evaluation Process: A Validation Study" (2013). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/4678