Misattribution of Teacher Value Added
July 22, 2014 09:00 AM
Misattribution of Teacher Value Added, by Umut Özek and Zeyu Xu, American Institutes for Research, was originally presented at the 2013 Fall Research Conference. The federal Race to the Top (RTTT) competition provided significant impetus for states to adopt “value added” models as part of their teacher evaluation systems. Such models typically link students to their teachers in the spring semester, when statewide tests are administered, and estimate a teacher’s performance based on her students’ learning between the test date in the previous school year and the test date in the current year.
Due to data limitations in many states, however, the effect of most student learning experiences between two consecutive tests cannot be distinguished from, and is often mistakenly attributed to, the value added of teachers in the spring classrooms. This study examines how teacher evaluations are affected by such misattribution and explores methods that can provide the best approximation in the absence of more detailed data. Results indicate that misattribution can introduce considerable bias for both reading and math teachers, leading many teachers to be mistakenly labeled as “effective” or “ineffective.” Amid the current move toward teacher-level accountability, and as many states design their teacher evaluation systems as mandated by RTTT, these findings provide valuable and timely information for policymakers and the wider education policy community.
Download the full paper [PDF] from APPAM's Online Paper Collection.