A heated controversy surrounds teacher evaluations based on value-added measurements (VAM) and the public’s right to know their results. After the L.A. Times printed a list of “effective” and “ineffective” educators, experts have questioned both the VAM methodology and the ethics of publishing potentially inaccurate findings.
The controversy over using value-added ratings to judge teachers in Los Angeles — which has been the subject of many blog posts, including my own August comments — continues unabated with the release of a new report from the National Education Policy Center, challenging the methodology used by an L.A. Times consultant and reporters to create a massive ratings database and publicly identify what the newspaper claims are ineffective teachers.
Last week, L.A. Times writer Jason Felch, the lead reporter on the project, attempted to trump the release of the NEPC report by breaking the embargo date requested by the Center. (Embargo dates are routine business for newspapers and newsmakers—a necessary partnership that assures fair and wide distribution of newsworthy developments.)
It seems clear that Mr. Felch, who was given an advance copy of Due Diligence and the Evaluation of Teachers, jumped the gun in an effort to reduce the report’s splash and to muffle growing concerns both about the methodologies used in the analysis and about the ways in which the L.A. Times has used the data. Felch and the Times have not only mined the data to produce dozens of news stories but provided public access to the database through an online search engine that rates individual teachers by name and ranks the top 100 elementary teachers and schools in the vast LAUSD system.
The NEPC researchers found many flaws in the analysis conducted by Richard Buddin, a senior economist at the RAND Corporation (working as an independent contractor). Both Felch’s series of stories and the database developed by Felch and his editors relied entirely on Buddin’s work to identify “effective and ineffective teachers.” Yet when the NEPC researchers did their own analysis and compared it with Buddin’s results, they found that, for reading, “only 46.4% of teachers would retain the same effectiveness rating under both models,” and for math, only 60.8%.
The NEPC determined that the Buddin/Times statistical approach was “producing biased estimates of teacher effects because it omits variables that are associated both with student test performance and how students and teachers are assigned to one another.”
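The omitted-variable problem the NEPC describes can be illustrated with a toy simulation. The sketch below is my own construction, not Buddin’s model or the NEPC’s reanalysis: all of the variable names and coefficients are made up for illustration. It shows how, when something that drives both test scores and how students are assigned to teachers (here, prior achievement) is left out of the regression, the estimated “teacher effect” absorbs its influence and comes out biased.

```python
import random

random.seed(0)

# Hypothetical setup: prior_achievement affects both scores and which
# teacher a student gets, but is omitted from the naive regression.
n = 5_000
prior_achievement = [random.gauss(0, 1) for _ in range(n)]
# Non-random assignment: stronger students tend to get "better" teachers.
teacher_quality = [0.5 * z + random.gauss(0, 1) for z in prior_achievement]
# Scores depend on the teacher (true effect = 0.3) AND on prior achievement.
scores = [0.3 * t + 0.8 * z + random.gauss(0, 1)
          for t, z in zip(teacher_quality, prior_achievement)]

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# The naive estimate is inflated well above the true effect of 0.3,
# because teachers assigned higher-achieving students look better
# than they really are.
print(f"naive teacher-effect estimate: {ols_slope(teacher_quality, scores):.2f}")
```

With these made-up coefficients the naive slope lands roughly twice the true effect, which is the qualitative pattern the NEPC is warning about, not a claim about the actual magnitude of bias in the Times database.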
You can read the NEPC authors’ meticulous description of their own methods and analysis for yourself. What’s particularly notable is their contention that the nature of the data called for a more conservative approach to determining teacher effects:
Because the L.A. Times did not use this more conservative approach to distinguish teachers when rating them as “effective” or “ineffective”, it is likely that there are a significant number of false positives (teachers rated as effective who are really average), and false negatives (teachers rated as ineffective who are really average) in the L.A. Times’ rating system.
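The false-positive/false-negative point can be made concrete with a small simulation. This is a deliberately simplified sketch under my own assumptions (every teacher is truly average, single-year estimates carry Gaussian noise), not the NEPC’s actual procedure: it shows how rating teachers on raw point estimates alone manufactures “effective” and “ineffective” labels out of pure noise, while a more conservative rule that demands the estimate clear roughly two standard errors labels far fewer teachers incorrectly.

```python
import random

random.seed(42)

# Assumption for illustration: ALL teachers have the same true effect
# (average), but each single-year value-added estimate is noisy.
N_TEACHERS = 10_000
TRUE_EFFECT = 0.0   # every teacher is truly average
NOISE_SD = 1.0      # standard error of one year's estimate
CUTOFF = 1.0        # naive rating threshold on the raw point estimate

estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_TEACHERS)]

# Naive rule: label directly from the point estimate.
false_positives = sum(e > CUTOFF for e in estimates)   # "effective", really average
false_negatives = sum(e < -CUTOFF for e in estimates)  # "ineffective", really average
print(f"false positives: {false_positives / N_TEACHERS:.1%}")
print(f"false negatives: {false_negatives / N_TEACHERS:.1%}")

# Conservative rule: only assign a non-average label when the estimate
# differs from zero by about two standard errors.
conservative = sum(abs(e) > 2 * NOISE_SD for e in estimates)
print(f"mislabeled under the conservative rule: {conservative / N_TEACHERS:.1%}")
```

Under these assumptions the naive rule mislabels a substantial share of genuinely average teachers on each side, while the two-standard-error rule mislabels only a few percent, which is the trade-off behind the NEPC’s call for a more conservative approach.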
While the NEPC researchers concentrated on what they perceived as flaws in Buddin’s methodology and conclusions, that doesn’t let Jason Felch and the L.A. Times off the hook. After claiming that teachers’ personal rights to privacy (and, indeed, fair treatment) were trumped by “the public’s right to know,” Felch and his editors had an obligation to vet and re-vet their methodology before playing that First Amendment trump card. We can now see they did not go nearly far enough.
Remarkably, when Mr. Felch broke the NEPC report embargo and reported on its findings, the L.A. Times headline proclaimed: “Separate study confirms many Los Angeles Times findings on teacher effectiveness.” Perhaps Felch and his editors were banking on the public’s aversion to reading reports about statistical methodology. But anyone who reads the NEPC executive summary will see that the newspaper’s choice of headline was self-serving and disingenuous — and that’s putting it mildly.
In response, the NEPC researchers were quick to release a fact sheet about the Times story, challenging its interpretation of their work and bluntly stating that the August publication by the Times of teacher effectiveness ratings was “based on unreliable and invalid research.”
WHAT IS most troubling about all this brouhaha — beyond the very real damage that the Times’ reckless actions have done to teachers who did not deserve the treatment they got — is this: Results-oriented teacher evaluations are very much needed. I could not agree more with Rick Hess of the American Enterprise Institute that VAM can be used to evaluate teachers, but only “carefully.”
Back in August I noted that:
The Bill & Melinda Gates Foundation, under the auspices of its Measuring Effective Teaching project, is taking a very thoughtful approach to teacher assessment by looking at multiple measures of student achievement and linking other metrics (e.g., classroom observations, teachers’ analyses of student work and their own teaching, and levels of student engagement) to capture a more robust and accurate view of who is effective and why.
Very few accomplished teachers are likely to argue against better methods of determining who is effective and why. In a JUST-RELEASED paper from the Center for Teaching Quality, three teachers well-versed in the issues surrounding evaluation policy call for the strategic use of value-added data, with the VAM models’ limitations in mind. They strongly recommend that classroom experts be engaged to help sharpen these tools and their underlying student assessments, and by doing so, produce accountability systems that better support effective teaching and learning.
In a sensible society, concerned about the future of the children in its public schools, we would not — and I hope we will not — leave the evaluation of effective teaching to reputation-seeking journalists and their attention-seeking news organizations.