After laying into the NY Times, NY Daily News, and NY Post on Friday for publishing teacher ratings based on sketchy data reports, causing a host of negative effects for teachers, parents, principals, and students, I am glad to be able to recognize the thoughtful stance taken by other important NYC education media outlets.
Gotham Schools, an online news source about NYC public schools with an active community of commenters, originally joined the other journalists in requesting to see the teacher data reports in 2010. But they have chosen not to publish those reports with individual teachers’ names attached.
Here is some of their reasoning behind the choice (full post here):
The fact is that we feel a strong responsibility to report on the quality of the work the 80,000 New York City public school teachers do every day. This is a core part of our job and our mission.
But before we publish any piece of information, we always have to ask a question. Does the information we have do a fair job of describing the subject we want to write about? If it doesn’t, is there any additional information—context, anecdotes, quantitative data—that we can provide to paint a fuller picture?
In the case of the teacher data reports, “value-added” assessments of teachers’ effectiveness that were produced in 2009 and 2010 for reading and math teachers in grades three to eight, the answer to both those questions was no.
We determined that the data were flawed, that the public might easily be misled by the ratings, and that no amount of context could justify attaching teachers’ names to the statistics. When the city released the reports, we decided, we would write about them, and maybe even release Excel files with names wiped out. But we would not enable our readers to generate lists of the city’s “best” and “worst” teachers or to search for individual teachers at all.
It’s true that the ratings the city is releasing might turn out to be powerful measures of a teacher’s success at helping students learn. The problem lies in that word: might.
Inside Schools, an online site providing information and ratings on NYC public schools for parents, has also elected not to include teacher ratings from the data reports because, as they state, these ratings are not helpful to parents in finding the best schools for their children. They published a very thoughtful and straightforward explanation for parents of why these ratings “don’t tell much.”
I found their explanation by Meredith Kolodner so helpful, even to me, that I am copying it here:
1. The ratings are based on exams that state officials have said are invalid.
The reports being released are from 2007, 2008, and 2009, before state officials altered exams that they said were not a reliable indication of whether or not students were learning. The exams were only testing a small part of what students were supposed to know, and it was easy to predict what would be on the exam each year. So students who were drilled in a narrow set of skills might do well and their teacher might be rated highly. Teachers who were teaching the whole curriculum and not focused on test prep could be rated lower, even if their students had in fact learned more.
2. Test scores alone don’t tell you how effective a teacher is.
The exams don’t measure a student’s critical thinking skills, creativity or if she works well with others, three things that many of the best teachers emphasize. Although the DOE tried to account for differences in student populations (such as poverty, disability, English language learner), it is not at all clear that they were able to measure all of the differences between classrooms that could affect scores.
3. The margin of error on the ratings is huge.
The DOE admits that a teacher whose rating is 50% (or about average) could actually have a rating as low as 25% or as high as 75%. Even though a teacher is assigned a score, the report includes a range of possible scores because the DOE acknowledges that the reports are imprecise.
4. Teachers of children performing well on the exams could be rated poorly.
The ratings are based partially on student “progress” on the exams. So if students score 3.8 out of 4 one year and then drop to 3.6 the next year, a teacher could be rated ineffective. This is true even though a 3.6 is considered above grade level, and the difference in the score could be a matter of getting one or two questions wrong.
5. The ratings are not stable from year to year.
If the measure were accurate, you would think that a good teacher wouldn’t get a 20% one year and an 80% the next. But an Annenberg study found that a third of the English teachers who got the top rating in 2007 sank to the bottom of the pile in 2008. The same was true for 23% of math teachers.
While the NY Times has published many pieces on the various reasons why the data reports are misleading, inaccurate, and unhelpful, they still chose to publish them. They chose to force teachers into publicly defending themselves against a flawed metric, the effects of which no number of op-eds can undo.
Thank you to Gotham Schools for your ethical stance on use of this information and thank you to Inside Schools for focusing on what’s actually helpful to students and their families—and for breaking it down so well.
[image credit: infosecisland.com]