Assessment Formats Benefit Some More Than Others

This evening I stayed late at school grading my students’ post-it note responses to Sherman Alexie’s The Absolutely True Diary of a Part-Time Indian. [My school has shifted to a standards-based grading system this year, so I now give two grades for each assignment: one for work habits/effort, and the other for achievement. Achievement grades are based on an IB Middle Years Programme rubric I modify for specific assignments.] As I was assessing my 8th graders’ reading notes, I happened to also have a spreadsheet open on my computer with my students’ data from our most recent Interim assessment in reading, which included several passages with multiple choice questions. I hadn’t looked at it very carefully yet. Out of curiosity, as I finished assessing each student’s responses to the novel, I compared the achievement grade on my rubric to the grade he or she received on the standardized, multiple choice reading test.

I had no idea what I’d find, but it turned out there was a fairly high correlation between the achievement grades (out of 10) I was assigning for post-it note responses to Alexie’s novel and the percentage correct the same students earned on the multiple choice reading Interim. There were three glaring exceptions (my sample size so far is 20):

1) More than one boy with very messy handwriting did better on the multiple choice test than on the post-it notes. There could be many reasons for this. Perhaps I was biased in my grading because I struggled so much to read what these students had written. Perhaps a student struggles so much with the motor skills of writing that he isn’t showing his real abilities in the notes. Or perhaps, because of those same struggles, a student has weaknesses in his writing that teachers have never addressed, because it is so difficult to actually see what they are.

2) Several students whose home language is not English did significantly better responding authentically to the novel than answering the multiple choice questions. I’ve always felt that multiple choice tests put ELLs at an unfair disadvantage, because they put students in front of texts and questions that utterly lack context, and context is one of the most important supports English language learners need to read successfully. On top of that, the questions are tricky and rely on logic as much as reading skill, yet choosing the best answer may hinge on whether students understand a single word or word construction.

3) One particularly strong-minded student who “hates tests” evidently did not put much effort into the Interim assessment. He did well on the notes, but extremely poorly on the multiple choice questions. Given his strong capability as a reader, I know that his feeling about the test is what actually got measured that day.

The disparity between these students’ performance on authentic vs. standardized assessments gives me plenty to think about. It reminds me that standardized tests must never become accepted as any kind of definitive measure of student learning or teacher effectiveness. Furthermore, we teachers should be trusted to assess our own students, using all the tools available to us.