I was writing a piece of my book about assessing students’ reading levels. I lean towards lots of informal assessment, through conversation and observation, as well as authentic assessments in projects, discussions, and my students’ reading notes. However, I appreciate the insight I get from conducting a formal reading assessment of my students. I’ve used QRI, DRA, running records, and Leveled Reading Assessments from Teachers College Reading and Writing Project.
Then I thought, Wow. The data from these assessments is so much more useful than the data from standardized reading tests. These comprehensive reading assessments help us find the level at which a student can read independently, the level at which that student can read with instructional supports, and point us toward the types of supports each individual student needs.
Standardized testing data, on the other hand, is far less detailed and applicable, and it throws in the wild card of multiple choice. Selecting answers to multiple-choice questions is not a reading skill. Unless the questions and answer choices are very straightforward, testing only lower-order thinking skills, we are actually testing a combination of reading skills and logical reasoning skills. Those two processes are not at all the same. Why are we mixing them up in one assessment? Maybe for adults, or in high school, such as on the SATs, this is fair. But for children? When they’re still developing critical thinking skills? Hmm…
And what sort of reasoning gets tested through tricky multiple-choice questions on reading passages? Why is it important? I can see a valuable skill in being able to make good choices from among several options. But in real life, the “best” choice is never the one some stranger or authority has already chosen for us, is it?
[image credit: www.nhwoodworking.com ]