Data dance

This year, I’ve gotten more hands-on experience with standardized assessment. At my school, we give students practice state ELA Benchmark tests in September and February. The official state ELA exam is in May. Throughout the year, I’ve had to create regular interim assessments, similar in format and content to specific sections of the state test, to monitor the students’ progress.

Last week I gave a Reading Interim, for which I pulled only fiction passages and their corresponding multiple-choice questions from a previous year’s state exam. Because we’ve been reading and studying fiction all year, I was hoping to see some growth.

I was heartened to see that, in fact, after a year of work on reading and responding to fiction literally, inferentially, and critically, my students showed serious growth in their ability to correctly answer multiple-choice questions on unfamiliar fiction passages. (The class average jumped about 30 points since September.) I would not say that I have been “teaching to the test” much at all this year in how I’ve approached reading instruction and literature studies, so I now have data showing that my students’ authentic learning in reading has translated into improved performance on a standardized test. PHEW!

Figuring out how to get useful data has taken a lot of time. I’ve learned to use SchoolNet, a program that allows me to create tests and link every test item to a state standard, electronically grade multiple-choice questions, and collect and view the scores in a variety of ways. I can also create short-answer or extended-response questions; I just assign a point value for each question and bubble in a score for it on each student’s test.

In addition to learning the program, it turns out that learning to select or write test questions that will yield useful data is another skill I’ve had to work on. For example, to get an accurate picture of how well students are doing on one standard, you need several questions tied to that standard. Otherwise, you may have a “bad” question (too easy, too hard, or unclear) and draw a false conclusion about students’ mastery of a standard based on one invalid question.

My first set of data was not particularly useful because I only had one or two questions on each standard and couldn’t conclude much from them. I also miscalculated the timing of the test. I did not give students enough time, so many of them didn’t complete the last two questions, which meant a score of zero on two writing questions. This made it look as though students were failing miserably on the standard to which those questions were linked. This time, I was able to make the necessary schedule changes and get help administering the new listening/writing test over a double period so time would not be an issue.

I am curious to see how I will feel about testing by the end of this year. At the moment, I feel pretty good about how I’m balancing authentic learning experiences and true intellectual development in my classroom with the need to prepare students for a standardized test at the end of the year. I hope I can keep that up.

What about my own time? Is learning to write a good Interim assessment a good use of my time? It seems better than giving an assessment I know nothing about. Am I assessing the skills that matter most? To be continued…

[image credit: logic.stanford.edu]