Value-added models: Help teachers or VAM-oose

[Image: wrapping paper covered in equations]

I don’t understand the complex algorithms used to calculate value-added models (VAMs). I’d have an easier time deciphering the formula Matt Damon wrote on his mirror in Good Will Hunting.

I’ve read research on VAMs like “Beware of Greeks Bearing Formulas” and “Should Value Added Measures Be Used for Performance Pay?” I’ve attended district training, talked to experts, and read my own score report as a teacher. And yet the concept of value-added models continues to puzzle me; I’m still hoping for the moment when I understand the logic and can see VAMs as a useful agent for improvement.

One common explanation of VAMs involves the Oak Tree Analogy.  The idea is simple and straightforward.  How can we measure the effectiveness of two gardeners (teachers) growing oak trees (students)?

According to the value-added logic, we’d need to consider a host of background factors like location, weather, and the variety of the tree. Instead of calculating raw growth (test scores), VAMs use these factors, or variables, to predict each student’s growth based on how he or she performed in the past, and then compare that prediction to the growth that actually occurred. However, the analogy leaves out critical factors beyond the effect of the teachers/gardeners: the motivation, personality, and daily experiences of the student, independent of instruction.
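For readers who want to see that logic in concrete terms, here is a toy sketch in Python. The numbers are made up and the model is a plain least-squares fit, so this is only an illustration of the predicted-versus-actual idea, not the VARC model: fit a prediction model on district-wide data using prior scores and background variables, then treat the average gap between a teacher’s students’ actual and predicted scores as a rough value-added estimate.

```python
# Toy illustration of the value-added idea: predict post-test scores from
# prior scores and background variables, then look at actual minus predicted.
# All data here is simulated; real VAMs use far richer models and controls.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical district-wide data used to fit the prediction model:
# prior test score, attendance rate, and an English Learner flag (0/1).
n = 200
prior = rng.normal(300, 20, n)
attendance = rng.uniform(0.80, 1.00, n)
ell = rng.integers(0, 2, n)
post = 50 + 0.9 * prior + 30 * attendance - 5 * ell + rng.normal(0, 8, n)

# Ordinary least squares: post ~ intercept + prior + attendance + ELL.
design = np.column_stack([np.ones(n), prior, attendance, ell])
coeffs, *_ = np.linalg.lstsq(design, post, rcond=None)

# One (hypothetical) teacher's class: prior, attendance, ELL, actual post.
my_class = np.array([
    [305, 0.95, 0, 345],
    [290, 0.88, 1, 318],
    [320, 0.97, 0, 352],
    [310, 0.91, 1, 330],
])
class_design = np.column_stack([np.ones(len(my_class)), my_class[:, :3]])
predicted = class_design @ coeffs

# Each residual is growth above or below prediction; the class average is a
# rough stand-in for a teacher-level value-added estimate.
residuals = my_class[:, 3] - predicted
print("Actual minus predicted, per student:", np.round(residuals, 1))
print("Rough teacher-level value-added estimate:", round(float(residuals.mean()), 2))
```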

Here in Hillsborough County, Florida, we use a model developed by the University of Wisconsin’s Value-Added Research Center (VARC) as part of our Empowering Effective Teachers Initiative (EET). The VAM factors in student characteristics such as prior test scores, age group, ESE category, English Language Learner status, prior attendance, mobility, and population density. Results feed into the 40 percent of our teacher evaluations that is based on student growth.

I received my first value-added report this year, and I was crestfallen. The scores, while respectable, told me little more than the pre- and post-test measures included in the calculations. Each child’s entry showed two test scores, one from the post-test in my classroom and one from the prior year. Beside each name was a rating from zero to five stars. The report included my total score. That’s it. The star ratings did not tell me each student’s VAM-predicted growth target, nor did they say whether students actually met those targets.

There was also no data I could disaggregate to improve my instruction. I expected information that would help me discern patterns among groups of students. Do I need to focus more on English Language Learners? Are students from a certain demographic outperforming others in my classes?

I felt like a confused student. I thought, what effective teacher would give students a grade without first showing them a rubric with clear expectations? Who would give them a grade with no feedback on how to improve it? Yet many districts, including my own, are already using value-added data to make high-stakes staffing decisions.

So, when VARC contacted me recently for some feedback on their model, I was elated not only to share on behalf of my colleagues, but also to learn.  Here was an opportunity to collaborate with developers in order to make this a reliable, helpful evaluative tool.

I had a lot of questions for the VARC folks:

  • What can teachers do with this data?
  • Will teachers see more disaggregated data in future reports?
  • How is a VAM reliable when pre- and post-test measures don’t align, or don’t allow students to demonstrate higher-level mastery, i.e., “sufficient stretch” (Eckert & Dabrowski, 2010)?
  • How do you account for team teaching and cases where school personnel are scored without actually teaching students?
  • What improvements to the model are you considering?
  • How do you influence the way districts use VAMs in high-stakes decisions?

These questions led to an eye-opening discussion. In my next post, I’ll share what I learned and where I now stand at the intersection of skepticism and hope.

 
