So, I may be playing a familiar tune here, but I’m tired of being asked to work in new ways without being given the knowledge, skills, time and tools to complete new tasks.  It’s the whole reciprocal accountability soap-box that I jump on a dozen times a year.

In this case, I’m struggling with a common nemesis:  Using data to drive my decision-making.  Now, in theory, I completely embrace the thought of using concrete, tangible evidence of individual student learning at the skill level to:

  1. Identify students in need of enrichment and remediation.
  2. Identify and then amplify effective instructional practices across an entire hallway.

Makes perfect sense to me.  Fits the “work smarter, not harder” ethos that defines the best practitioners in every field.  Ensures that every child has a learning experience tailored to their individual needs.  Heck, I’ve even written chapters in two different Solution Tree anthologies on assessment!

But I’m also frustrated with the lack of efficient tools to collect, manipulate and disaggregate student learning data at the skill level.

You see, like many teachers, the vast majority of my efforts to collect and analyze student learning data continues to start and end with the sticky notes that I use as exit cards several days a week—a decidedly low-tech, cost-effective system when you look at the materials required for implementation, but a miserable slog when it comes to trying to make sense of learning trends and patterns across an entire team of students over a nine-week period.

Even the digital solution provided by my district to collect data and analyze learning results struggles to make my work more efficient.

Called Blue Diamond, the system has a ton of potential.  Common formative reading assessments are available for every middle grades student in every class in our 130,000-student district.  Delivered every three weeks, these assessments do a reasonable job focusing my instruction on the kinds of specific skills that students need to learn in order to be effective readers.

They’re short—so they don’t consume a ton of my class time—they’re automatically scored by a Scantron machine—giving me instant access to results—and they’re generally well written, so I believe in the learning trends that I spot.

Heck, I’ll even readily admit that until Blue Diamond was introduced in my district, my reading instruction was just plain appalling!  While I’m sure that my students loved to read, there was very little that was systematic about my teaching.  Blue Diamond changed all that by giving me a clear picture of exactly what it is that my students were supposed to be learning.

The problem is that spotting learning trends that I can act on is darn near impossible because Blue Diamond reports student performance at the objective—instead of skill—level.

Need an example of why reporting student performance at the objective level (which sounds pretty logical) is inefficient at best?

Check out the language of the three state objectives covered on the most recent Blue Diamond assessment that I gave to my students:

Reading, Grade 06 Objective 5.01
Respond to various literary genres using interpretive and evaluative processes

Increase fluency, comprehension, and insight (reading strategies; figurative language, dialogue, and flashback; plot, theme, point of view, characterization, mood, and style; distortion and stereotypes; underlying messages)

Reading, Grade 06 Objective 5.02
Respond to various literary genres using interpretive and evaluative processes

Study the characteristics of literary genres: fiction, nonfiction, drama, and poetry (novels, autobiographies, myths, essays, magazines, plays, pattern poems, blank verse; interpreting impact of genre-specific characteristics; exploring author’s choices; exploring impact of literary elements: setting, problem, resolution)

Reading, Grade 06 Objective 6.01
Grammar and language usage

Types of sentences, punctuation, fragments, run-ons; subject-verb agreement, verb tense; parts of speech, pronouns, prepositional phrases, appositives, dependent and independent clauses; vocabulary development (context clues, a dictionary, a glossary, thesaurus, structural analysis: roots, prefixes, suffixes); dialects, standard English

On the bright side—and I really do believe in the potential of this program as a tool for informing teachers—the reporting features of Blue Diamond allow me to instantly generate a list of students who struggled with and/or aced each of these three objectives.  I can break those reports down by student subgroup, I can create new break points allowing for more careful sorting, and I can watch progress in each objective area over time.

I can compare performance between the different classes that I teach.  I can see how other classes are performing in our school and I can see how students across the district are performing on the same objective.  I can even find out automatically how many questions individual students answered correctly under each objective.

Pretty impressive, huh?

But because each objective covers about a dozen discrete skills, knowing mastery at the objective level is meaningless to me as a teacher!

If a child falls into the “needs improvement” section for Objective 5.01, I have no idea if it is because they’re struggling with point of view or characterization—two wildly different skills that require different kinds of remediation experiences—unless I go back to each identified child’s exams, manually look over their individual responses, and figure out the skills covered by the questions that they’ve missed.

(What are the chances that already overworked teachers are going to actually tackle that task?)

The good news is that this is a seemingly easy digital problem to fix.  In a world where tagging has become the default way to sort any kind of content on sites that contain heaping cheeseloads of information, every single question in our district’s formative assessment system could be quickly tagged with the discrete skill that it is designed to assess.

Then, when student reports are generated, teachers could see the questions that students were missing and the skills that each question was designed to assess.  In my wildest dreams, a tag cloud could be generated where the skills missed most frequently appeared bigger than those that students had no trouble with, providing visual cues that were quick and easy to pick up on.
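The tagging scheme I’m describing really is that simple.  Here’s a minimal sketch in Python of what the math behind such a report might look like—the question tags, student names, and skill labels are all hypothetical, invented for illustration, since nothing like this exists in Blue Diamond today:

```python
from collections import Counter

# Hypothetical tags: each question number on an assessment mapped to the
# discrete skill it is designed to assess.
question_skills = {
    1: "point of view",
    2: "characterization",
    3: "figurative language",
    4: "point of view",
    5: "theme",
}

# Hypothetical results: the question numbers each student missed.
missed_questions = {
    "Student A": [1, 4],
    "Student B": [1, 2],
    "Student C": [4, 5],
}

def missed_skill_counts(question_skills, missed_questions):
    """Tally how often each tagged skill was missed across a class."""
    counts = Counter()
    for missed in missed_questions.values():
        for q in missed:
            counts[question_skills[q]] += 1
    return counts

# Skills missed most often would render largest in a tag cloud.
for skill, n in missed_skill_counts(question_skills, missed_questions).most_common():
    print(skill, n)
```

With data in that shape, the tally—and the tag cloud built from it—tells a teacher at a glance that this class needs remediation on point of view, not characterization, without anyone digging through individual answer sheets.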

Most importantly, though, data—which is now too vague to be meaningful—would become instantly useful because teachers could quickly develop remediation experiences that target the kinds of skills students haven’t mastered yet.

The bad news is that these kinds of changes are unlikely to happen in time to help me with the students in my classroom today.

Don’t get me wrong:  Our district is constantly improving Blue Diamond, so I believe that someday we’ll be able to access information at the skill level—but until then, I’m stuck aggregating sticky notes if I want to find actionable trends in student learning outcomes.

And I honestly worry about the consequences of asking teachers to be “data driven” while failing to provide them with practical tools that make data action possible.  Are we turning teachers off to data by promoting an idea that is professionally responsible but nearly unmanageable?

I don’t know the answers to my questions.  I don’t work with enough teachers to have a good perspective about the state of data-driven decision-making in our country.

I just know that each time I sit down to crunch numbers with antiquated or inefficient tools, my heart gets a bit harder towards the whole process—and that is a frightening outcome with real consequences for the kids in my classroom.
