Mirror, mirror on the wall: Troubling reflections on assessment

In a fascinating, barely noticed commentary earlier this month in EdWeek, Heinrich Mintrop used a relatively simple study to raise the intriguing question:

“…are schools measured as high-performing by their accountability systems actually better schools? And could others learn from them what to do better?”

While popular logic screams, “Yes, of course!” Mintrop’s study suggests, “Maybe not.”

Admittedly, Mintrop’s was a small study, and even he noted that it needed to be conducted on a much larger scale. However, other evidence is beginning to cast serious doubt on the conventional wisdom that has fueled NCLB and other assessment-based accountability plans.

At the community college where I now teach, a team of 30 faculty members from disciplines across the campus led a two-year study of our writing program, reviewing the individual performance of over 2,500 students across a five-year period and examining current student placement and completion data. Among our findings was this startling and disturbing correlation: over the same period in which our state firmly established its testing program in language arts, which includes benchmark writing tests in the 4th and 8th grades and a mandatory English exit exam for high school students, the performance of students in entry-level college English courses significantly decreased! In other words, although students were meeting the state standards for grammar and writing as they moved through the K-12 system, they were arriving at college less and less prepared to succeed.

Our faculty have dealt with this at the ground level: Try explaining to students (and often their parents) who were top performers in their school districts, even valedictorians or honor students, that they now have to take one or more remedial English courses before they can enter Freshman Comp I.

The obvious problem is that whatever skills the state tests measure are not the skills students need to succeed in their college-level courses. That sounds like an easy fix: just align the curricula and the tests across the K-16 spectrum, as many have advocated.

The not-so-obvious problem is that the curricula themselves are, in fact, already closely aligned. What’s out of sync are the pre-packaged tests used to measure student performance. Anecdotally, veteran teachers muse that students came to college better prepared when their secondary teachers were allowed to teach them how to write well, not how to write for a state test.

Just a fluke? Or are there others out there who are finding similar results?
