I’ve spent the better part of the past two days working through Atul Gawande’s Better: A Surgeon’s Notes on Performance. In the process I’ve discovered that as a teacher, I share an unexpected kinship with obstetricians.
You see—just like teachers—obstetricians have traditionally been underappreciated by their peers in other medical specialties. As Gawande explains:
“Doctors in other fields have always looked down their masked noses on their obstetrical colleagues. They didn’t think they were very smart—obstetricians long had trouble attracting the top medical students to their specialty—and there seemed little science or sophistication to what they did.” (Kindle Location 2317-21)
But just like teachers—especially those working on collaborative teams—obstetricians have committed themselves to collective inquiry around—and systematic implementation of—best practices.
“In obstetrics…if a new strategy seemed worth trying, doctors did not wait for research trials to tell them if it was all right,” Gawande writes. “They just went ahead and tried it, then looked to see if results improved” (Kindle Location 2327-2301).
This commitment to collective inquiry in obstetrics largely began with one woman—Virginia Apgar.
Apgar—a brilliant doctor convinced to move into anesthesiology early in her career because women had little hope of being accepted in the surgical wards of the 1930s—loved working in delivery rooms. The energy of birth and the joy of new life were inspiring to her.
She was troubled, though, by the seemingly callous treatment that many babies received at birth. Any imperfection—struggles with breathing, poor coloring, small size—could result in a rushed judgment that a child was too sick to live, and a new life would be left to die quietly.
What bothered Apgar the most was that these life-and-death decisions were being made based on nothing more than a doctor’s general impression of how likely a child was to survive, given its condition after birth.
For Apgar, making choices about a child’s future based on nothing more than impressions seemed morally wrong.
So in 1953, she developed and published the Apgar Score—an indicator of newborn health that is still used worldwide today.
Designed as a repeatable procedure that could be conducted by nurses and doctors alike, Apgar’s test requires medical professionals to make simple observations of a baby’s color, crying, breathing, heart rate, and limb movements at one and five minutes after birth.
Those simple observations were, in and of themselves, an improvement over the practices of the day, which led doctors to give up quickly on seemingly unhealthy babies.
More importantly, those simple observations resulted in new stories of survival. Babies that medical professionals would have once given up on were surviving, showing dramatic improvements in the first minutes of life.
Those simple observations also touched on the competitive streak in most doctors, who began to try to find new ways to intervene on behalf of babies with lower Apgar scores.
Doctors learned that warming and oxygen were simple strategies for improving an infant’s condition in the early minutes of life. They also learned that epidural anesthesia—instead of the general anesthesia that had been the norm in deliveries—led to higher Apgar scores.
Knowledge about successful strategies for saving infants who once would have been lost spread rapidly through the obstetrics community, resulting in rapid advances in neonatal care.
Statistically, infant mortality rates show the dramatic impact of Apgar’s efforts: in the 1950s, one in thirty babies died at birth. Today, one in five hundred does.
What lessons can collaborative teams learn from obstetrics?
Here are four:
Our choices about children need to be based on something more than the general impressions of classroom teachers.
If you have read the Radical for any length of time, you know just how much I hate our efforts to quantify everything there is to know about the kids in our classrooms—and just how passionate I am about the knowledge and skills of classroom teachers.
But many well-intentioned teachers can fall into the trap of believing that their general impressions about the students in their classrooms are the only evidence that they need in order to make responsible choices.
“Johnny has no chance,” we sometimes think about struggling students. “I’ve seen a million kids just like him.”
That reliance on impressions alone—the same professional hubris demonstrated by obstetricians in the early 1950s—can result in children who fail simply because we stop fighting on their behalf.
When we pair quantifiable evidence with our impressions—something that collaborative teams do every time that they give common assessments—we are far likelier to find ways to save more of our students.
Our efforts to quantify what we know about our kids don’t need to be overly complex or sophisticated.
The good news in Apgar’s story is that collecting “quantifiable evidence” doesn’t have to be a complex process that requires intensive data analysis skills.
Heck, Apgar’s rubric for the health of a child measured five basic indicators, each on a simple zero-to-two-point scale. “Baby’s crying. Check. Baby’s limbs are moving. Check. Baby’s color is pink. Check.”
The key to the success of the Apgar test doesn’t rest in the complexity of the indicators. Instead, it rests in the approachability of the indicators.
ANYONE can apply the Apgar test. It’s not time-consuming. It’s not complicated—and as a result, it’s not avoided or overlooked.
For learning teams, that means our initial efforts to quantify what we know about our kids do not NEED to be sophisticated, validated, or approved by research gurus.
Instead, our initial efforts to quantify what we know about our kids just need to (1) encourage careful observation and (2) be simple enough that every teacher can implement them without excuses or extensive additional training.
As best practices are identified, they must be replicated.
Perhaps the most important lesson that can be learned from obstetricians is that as successful practices are identified, they are implemented with fidelity across the profession.
It is no accident that giving birth in Chicago looks a lot like giving birth in Chattanooga.
While obstetricians are constantly improving their practices based on evidence, they are also committed to learning from their peers and establishing standards of care based on practices that are proven.
For learning teams, this lesson is equally important.
While standardization of practice should never be the primary goal of any learning community—rigidly scripting curricula strips innovation and experimentation out of our profession—we should be ready to replicate the practices that we know are working with our students.
When we ignore evidence of successful practices in favor of personal preferences, we are failing as professionals.
The most successful practices are practices that can be implemented by every practitioner.
Equally interesting is that obstetricians are committed to identifying and adopting practices that can be reliably implemented by all practitioners—instead of practices that can be implemented only by a small handful of obstetrics superstars.
Take forceps deliveries, for example: while research shows that forceps deliveries are far less invasive for both mothers and newborns, they have been largely abandoned in favor of Cesarean sections in challenging pregnancies (Gawande, 2007).
The reasons obstetricians have abandoned forceps are relatively simple: Forceps may be safer and less invasive in the hands of a highly accomplished practitioner, but in the hands of a novice, they can be deadly (Gawande, 2007).
Deliveries by Cesarean section, on the other hand, are far simpler to master. What’s more, the actions taken by obstetricians during a Cesarean section can be observed and coached by mentors standing alongside the birthing table (Gawande, 2007).
That’s important for collaborative teams committed to improvement to understand. Our goal shouldn’t be to collectively identify practices that are beyond the ability of most of our peers.
Instead, our goal should be to find ways to incrementally improve practices that are within the reach of every teacher. Otherwise, large-scale improvements across entire teams are impossible.
Any of this make sense? How successful have your schools been at devising systems for collecting evidence on student success and then at developing instructional practices that work?
Which of these core actions is your learning team struggling with? How are you going to change the work that you are doing together to overcome those hurdles?