Soon, our first TeacherSolutions group here at TLN will release a report on our study of performance pay for teachers. We looked at the history of this issue, the many failures along the way, the objections, the rationales, and most important, the benefits (if any) for students and teachers. One reason the performance- or merit-pay issue has been revived recently is the proliferation of standardized testing in K-12. This glut of test data has spawned statistical models that attempt to measure the “value added” by an individual teacher or a school to a student’s level of academic achievement.
Many in the general public, as well as state and federal politicians, believe that these value-added calculations can tell us which teachers are truly effective in raising student achievement. They believe that if they can tie teacher salaries to these calculations, good teachers will be rewarded and the others will be motivated to do better.
I have some real concerns about applying value-added formulas to student achievement tests, especially at the secondary level. I once asked William Sanders, the man who is credited with developing the best known value-added model, how these calculations could be done with high school students, since they are not tested annually under NCLB. He said high school students’ achievement levels are determined by looking at their ACT or SAT scores (usually from the junior or senior year); then extrapolating backwards to the last time they took a major standardized test in those subjects (math/reading). For many students that’s 8th grade—for some it might be a high school “exit” exam.
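To see why this worries me, consider a toy sketch of that backward extrapolation (my own illustration, not Sanders’s actual model, which is far more elaborate): if the only data points are an 8th-grade test score and a junior-year ACT- or SAT-linked score, the arithmetic can do little more than spread the total gain across the intervening years.

```python
# Toy illustration (hypothetical numbers): with only two test scores,
# three or more years apart, a backward extrapolation spreads the gain
# evenly across the untested years. It cannot see which teacher, course,
# or outside influence actually produced any part of that gain.

def interpolated_annual_gain(score_8th, score_11th, years=3):
    """Split the total score gain evenly across the untested years."""
    return (score_11th - score_8th) / years

# Two students with very different classroom histories, but the same
# endpoints, yield identical "per-year" numbers for every teacher involved.
gain = interpolated_annual_gain(score_8th=200, score_11th=230)
per_year = [gain] * 3  # 9th, 10th, and 11th grade each credited equally
print(per_year)  # every year gets 10.0 points, whatever actually happened
```

The point of the sketch is not the arithmetic but what it omits: every teacher in the gap years receives the same imputed contribution, by construction.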
Admittedly, I’m no statistician, but how can those types of numbers be used in any reliable way as a measure of what an individual teacher may have done to help a specific group of students? This approach denies the cumulative nature of education. It ignores the truth that multiple factors shape what students learn and retain. For example, the work of a highly effective vocational teacher may be the trigger that helps students advance in reading or math, yet those teachers are never considered in these value-added formulas—only the teachers in the tested subject areas are.
Moreover, students develop and mature as learners over time. A student may be introduced to a concept or skill in 6th grade and have it reinforced in different ways by different teachers over several years; then, in 10th or 11th grade, the concept seemingly takes root all at once, and the student assumes genuine ownership of the knowledge, as evidenced by a deeper understanding and the ability to apply it. Such “seeding” and “harvesting” occurs repeatedly over the course of any student’s educational career. Which individual teacher would get the “credit” for these accomplishments? The one who originally introduced the concept? The one who nurtured it along the way? Or the one in whose classroom the student happens to be when it matures? Add to this the educational influences to which students are exposed outside of the schoolhouse (media, activities, conversations, even…books!), and we have an infinite number of possibilities.
Learning does not develop in a linear pattern, or even in layers. Often, it develops in a spiral, with students appearing to move backwards in an area before leaping forward. We see this in writing classrooms all the time. We also see it when students move from one level to another, for example from elementary to middle school, or middle to high school.
For all its technical garnish, the use of value-added formulations strikes me as an oversimplification of the teaching and learning process, and as an insult to the complexity of teaching.