Having just returned from a quick holiday with family where I intentionally avoided my email account and The Radical, I was jazzed to see the conversation brewing in the comment section of my What I REALLY Reject post. Nearly a dozen readers have left thoughtful comments that—as usual—are pushing my thinking about accountability in education.
One commenter—Matt Johnston, who writes a great blog that I like to read called Going to the Mat—left me a series of pretty challenging questions to think through. While I haven’t had a chance to tackle all of his thoughts, one has caught my attention for today.
In response to my suggestion that teachers be held accountable for documenting the impact of their instructional practices, Matt asks:
1. Define “impact” in the following statement “impact of the instructional practices that I’ve chosen to use in my classroom.” This is a critical definition in your plan, one subject to so many alternatives, like did the student learn the necessary curricula as defined? (If so, how do we know?) Does impact mean the child liked coming to your class and was active in class? Are we to rely only on the assessments you give as certification that the student learned something in your class?
Matt’s question is timely and legitimate primarily because one of the most frightening realities in education today is that the quality of instruction can vary greatly across classrooms on the same hallway. Heck, even the very nature of the implemented curriculum can vary greatly from room to room!
This point stood out to me most a few years back when I was teaching science for the first time. Having little experience with the curriculum, I was left to guess at what content to stress in class, how long to spend on units, and which objectives would pose my students the greatest challenge. While I had a curriculum guide and a textbook, my decisions were somewhat random and unstructured. As colleagues spent months on energy, my students were absorbed in a study of the solar system.
Someone had it wrong!
The thought that students at the same grade level, working in the same school building, were walking around with vastly different bits of knowledge from the same curriculum is nothing short of embarrassing to me. Worse yet, I’m pretty certain that every teacher on my hallway was rated “Above Average” or “Exemplary” on our year-end evaluations.
How is that possible?
So I definitely agree that any model of accountability designed to hold teachers “accountable” for the “impact of their instructional practices” has to provide structures that ensure the delivery of a standardized curriculum to every student in a school. Teachers clearly can’t become complete “free agents” when it comes to selecting content.
What’s more—and this may surprise you—I’m not opposed to the use of external measures of performance that are uniformly administered to all students in a school or a district. In fact, I understand the role that standardized tests can play in giving educators—and community leaders—feedback about the performance of their students.
Where I think we’ve gone wrong is in placing complete emphasis on standardized tests as a method of gauging teacher performance. By doing so, we (unwittingly?) incentivize instructional practices that are dumbing down delivery in our classrooms. When all that I am held accountable for are the numbers that come back after testing each June, I can guarantee you that my work is going to focus on producing results on those tests.
What would happen, instead, if teachers—in conjunction with their administrators and using data sources including community feedback surveys, attendance numbers, classroom administered assessments, peer observations and end of grade test scores—developed an action plan for reflecting on instruction in their classroom over the course of a school year?
And then, what would happen if we used work towards completing that action plan—instead of end of grade test scores—as the primary tool in teacher evaluation and compensation decisions?
Would we find teachers developing more sophisticated understandings of their students and the communities that they served? Would individual teachers and schools start to develop a catalog of instructional practices that were most effective with different subgroups of students?
Would we avoid the trap of teaching to the test because we were choosing to place emphasis—both in evaluation and compensation—on instruction?
Let me give you an example:
As a teacher, I have a professional “hunch” that engaging students in Paideia seminars is an effective way to increase student motivation and to engage students in higher-order thinking skills. My hunch is based on a few books and articles that I’ve read—as well as the reaction that my students have had to a few of the seminars that I’ve been brave enough to conduct in class.
Under my (admittedly rough) proposal, I would sit down with my administrator at the beginning of the year and explain that investigating the impact that Paideia had on student achievement and motivation was an area of great professional interest for me. Together, we could design a course of study, so to speak. That course of study might include observations of other teachers conducting seminars, reading more books and articles on Paideia and attending appropriate professional development sessions.
The plan would also include a schedule for introducing Paideia in my own classroom. Perhaps the first few months of the year would be spent investigating and observing, the next few months would be spent implementing seminars and the last few months would be spent perfecting the process and documenting results.
Every plan would have to include a detailed description of the documentation that teachers were going to collect to record the results of their study. Teachers may choose to use end of grade test scores as a primary measure of the impact that their instructional practice had on student achievement. They may also decide to conduct surveys with students or to reflect on formative assessments delivered during the course of the year.
In my Paideia example, I would probably choose to study my students’ abilities to answer the evaluative and interpretive questions that are included on a district formative reading assessment (read: multiple choice test) that we give every three weeks. If my “hunch” that Paideia will engage students in higher order thinking is correct, it should naturally translate to higher scores on similar questions on these (automatically graded) reading assessments.
I’d also probably collect student feedback surveys simply because discovering a highly motivating instructional strategy at the middle school level is like striking gold! More importantly, while I’m highly motivated by Paideia seminars, I’ve never bothered to ask my kids what they think of them. I might be wasting tons of time by pursuing a strategy that is just plain boring to twelve-year-olds.
Throughout the course of the year, the administrator who was evaluating my performance would regularly touch base with me to see what kind of progress I was making towards completing my action plan. Instead of observing just any lesson, they’d likely want to see me deliver a Paideia seminar. As we engaged in post-observation conversations, they’d likely direct their questioning towards the documentation that I was compiling around the success or failure of Paideia as an instructional strategy.
At the end of each year, each teacher would be responsible for submitting a report on the instructional practice that they’d researched, including a description of the strategy, the research that supported it, and documentation of the impact that the strategy had on student achievement. Teachers would be expected to draw conclusions about the strategy and make recommendations about its use in other classrooms across the building.
I guess my argument is pretty simple—even though it took forever to explain! Our current system of “accountability” is encouraging teachers to focus only on skills that prepare students for end of grade tests…and that’s leading to poor instruction in thousands of classrooms that is tacitly accepted because “the results” look right.
Let’s redirect the focus of our nation’s “accountability” efforts—and of teacher evaluation and compensation decisions—toward encouraging professionally responsible practices such as the deep reflection on instruction and documentation of results that characterizes our most accomplished educators.
By doing so, I’ll bet that teaching and learning improve pretty quickly—and that standardized test scores will rise too!
Ain’t that the win-win situation we’re all looking for?