Two weeks ago, I clung to a series of ropes three stories above the ground, frozen by my fear of plummeting to the pavement. Students pointed their camera phones up at me, cheering for me to conquer the Sky Ropes obstacle course and my acrophobia. The place resembled an iron and hemp jungle, but all I could hear was the cacophony of cars buzzing past Tampa’s Museum of Science and Industry. I thought, haven’t I jumped through enough hoops as a teacher?

The course attendant assured me the harness system would hold me while my students swung and scampered through the array as effortlessly as spider monkeys. He took his time reassuring me as I hugged a pillar like it was my mom. “Take one step at a time,” he said. What timely advice.

In my last post, I previewed a recent conversation with the University of Wisconsin’s Value-Added Research Center (VARC). As I write this follow-up, I see an eerie parallel between my day on the ropes and my evolving opinion of value-added models.

As with the Sky Ropes course, I’m told VAMs work despite their complex designs. But I cautiously ask the same question of VAMs that I asked of the ropes: Are they safe just because someone says so? Shouldn’t we conduct more testing first?

After speaking with the folks at VARC, I’m convinced VAMs can indeed be a safer bet than the teacher evaluation tools of yesteryear, but there’s room to improve before they play any part in staffing or compensation decisions.

Classroom value-added reports are designed to help teachers reflect on possible areas of strength and improvement. VARC is working on analytics to present more comprehensive, transparent data to teachers. In Tulsa, another district using the VARC model, school-level reports break down scores nicely by subgroups such as ELL status. You can access quite a bit of clear, useful information about how Tulsa uses and explains VAMs to teachers here.

VARC also works with districts to make sure the assessments feeding the model are aligned and of high quality. Of the 700 tests used in the Hillsborough value-added model, 261 were flagged by VARC for quality-control discussions with the district. These control measures are positive signs. Such conversations should lead to stronger assessments for students, valid VAM data, and reliable reflective tools for teachers. Until those assessments are further developed, however, school-level data may make a better reflection point than individual classroom data.

VAMs get two things absolutely right: they compare students to their own prior performance, and they aggregate a teacher’s data over the course of several years. That is a far more logical approach than the AYP process of comparing a school’s test scores from year to year, which ignores the apples-to-oranges dilemma of comparing different groups of students.
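To make that idea concrete, here is a deliberately simplified sketch of the logic, not VARC’s actual model (which controls for many more factors), using made-up teacher names and scores: predict each student’s current score from their own prior score, treat the leftover growth as the teacher’s contribution, and average it within each year and then across years.

```python
# Toy illustration of the core value-added idea (hypothetical data, not VARC's model):
# compare each student's actual score to a prediction based on their prior score,
# then average those differences per teacher across several years.

from statistics import mean

def value_added(records):
    """records: list of (teacher, year, prior_score, current_score) tuples."""
    # Fit a simple one-variable prediction, current ~ prior, across all students.
    priors = [r[2] for r in records]
    currents = [r[3] for r in records]
    mx, my = mean(priors), mean(currents)
    slope = sum((x - mx) * (y - my) for x, y in zip(priors, currents)) / \
            sum((x - mx) ** 2 for x in priors)
    intercept = my - slope * mx

    # Each student's "growth residual": actual minus predicted current score.
    by_teacher_year = {}
    for teacher, year, prior, current in records:
        residual = current - (intercept + slope * prior)
        by_teacher_year.setdefault((teacher, year), []).append(residual)

    # Average residuals within each year, then across years for each teacher,
    # so one unusual class doesn't dominate the estimate.
    yearly = {}
    for (teacher, year), residuals in by_teacher_year.items():
        yearly.setdefault(teacher, []).append(mean(residuals))
    return {teacher: mean(years) for teacher, years in yearly.items()}

# Hypothetical data: (teacher, year, prior score, current score)
data = [
    ("Ms. A", 2011, 300, 330), ("Ms. A", 2011, 280, 315),
    ("Ms. A", 2012, 310, 335), ("Mr. B", 2011, 305, 310),
    ("Mr. B", 2012, 290, 300), ("Mr. B", 2012, 320, 318),
]
print(value_added(data))
```

Even in this toy version, the two strengths show up: each student is measured against a prediction built from their own prior score rather than against a different cohort, and the multi-year average smooths out any single class’s quirks.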

I’m still a bit unclear on how value-added developers are influencing district policies. VARC partners with Battelle for Kids, a non-profit organization that handles VAM communication and professional development in Hillsborough County, among other places. But if they’re letting districts know the models need tweaking, who is making the high-stakes decisions to leap before they look?

One solution is to slow down and collaborate with VAM developers to ensure districts have the most reliable tools to evaluate their teachers. Let’s show stakeholders how specific teachers are using the data effectively. Why not conduct a study building on the Measures of Effective Teaching project, in which teacher observation and student survey data are compared with those teachers’ value-added scores, and then follow the teachers’ instructional reflections and improvements? Maybe then we could pinpoint patterns of quality instruction and see how those strategies correlate with higher value-added scores.

I’m hopeful because real conversations are taking place. Value-added developers are asking for teachers’ opinions. It’s an important step. And as the Sky Ropes attendant wisely advised me, “Take one step at a time.”
