Side effects of value-added measures

As the research has demonstrated, value-added measures are not yet capable of accurately measuring the full range of teacher effectiveness. While proponents of value-added models continue to push for their improvement and adoption, some teachers would like to put a nail in the coffin of VAMs and call it a day. I’m not so pessimistic. Yes, I do believe that VAMs can play a role in teacher evaluation. But I don’t want to talk about that now. I want to discuss the side effect of VAMs that makes me both hopeful and thankful: they have forced teachers to take responsibility for developing valid and reliable student assessment systems.

Ever since No Child Left Behind first put a renewed focus on test scores, teachers, parents, administrators, and others have been bemoaning the effects those scores have had on schools: narrowing of the curriculum, teaching to the test, and so on. More recently, many of my colleagues have been worried about how limited these test scores really are. In many cases, standardized tests are the only measures of student achievement that statisticians have available to place in their value-added models. My teacher-led Washington New Millennium Initiative group wrote about this in our report, so it was comforting to see our colleagues in the Illinois NMI express similar concerns in their report, “Measuring Learning, Supporting Teaching: Classroom Experts’ Recommendations for an Effective Educator Evaluation System”.

The report was written by fifteen teachers from across Illinois to make recommendations on implementing Illinois’s Performance Evaluation Reform Act (PERA). In doing so, they succinctly outline the concerns that many of us have regarding standardized tests:

“In our experience with ISAT and Prairie State exams, test preparation emerges as its own component of the curriculum, as teachers and other school staff push to meet accountability goals—sometimes to the detriment of learning goals. Students become skilled at taking multiple-choice tests as opposed to engaging deeply with content and mastering higher-order thinking skills that are critical for success in college and 21st-century careers. Current assessments in Illinois are also generally acknowledged not to be too reliable or well-aligned with curricular goals and lacking in “stretch” to measure every student’s growth accurately, making them unsuitable as a basis for value-added models planned under PERA. Also, Illinois teachers currently do not receive student assessment results until the following school year—after those students have moved on to another teacher. Reporting delays make results useless as tools to improve instruction for our students now.”

They then discuss how measures of student learning can be used effectively in evaluation:

“…we agree that well-designed assessments—administered formatively as well as summatively, aligned with curricula, focused on higher-order skills and with timely turnaround of results—can be useful tools to support effective teaching in every subject and grade. We encourage PEAC [Note: PEAC is Illinois’s Performance Evaluation Advisory Council, charged with determining how to implement PERA] to consider use of multiple measures of student assessment including portfolios, computer-adaptive tests, observations of student learning behaviors and oral examinations or presentations. Moreover, this assessment development process needs to begin with the educators. Teachers should be the ones taking the lead on development of these measurement tools in conjunction with other district, state and union leaders.”

It is clear that the drive to evaluate teachers based on student test scores is making us all take notice. And we as teachers now understand that we must be the ones taking the lead to make student assessment systems more valid and reliable, both for our students and for our own development as teachers. So as states including Washington and Illinois revise their teacher evaluation systems to include student assessment data, as Common Core standards are adopted, and as the SMARTER Balanced Assessment Consortium (one of two state consortia developing assessment systems aligned to the standards) gets up and running, I believe we have value-added models to thank. That is, if value-added models hadn’t provoked such a reaction from educators, we might not have seen the substantive responses that are now producing pathways to more viable assessment systems.

I’m still not convinced that VAMs are the way of the future. If they do stick around, I see them as a single tool in the toolbox of teacher evaluation. But I’m happy with what they have wrought so far.
