Two Oakland schools pilot new teacher evaluation system

This piece was originally published on Katy Murphy’s education blog with the Oakland Tribune.

In the last session of the OEA/OUSD teacher conference last Saturday, I sat in on a presentation about a new teacher-evaluation system being piloted by two Oakland schools. Like my own school, these two are under a mandate from state and federal education authorities to dramatically remodel themselves because their test scores remain unsatisfactory.

The schools applied for, and received, a federal grant to help them with their remodeling. One of the conditions for the money is to revamp their teacher evaluation system so that student achievement data is included. Additionally, the new system will have to include provisions for teacher improvement, reward, and removal.

The panel talking about the evaluation system included teachers, principals, and district personnel in charge of school transformation.

The proposed teacher evaluation system used by these two schools includes six components:

  • Data-driven Planning and Assessment
  • Classroom Learning Environment
  • Instruction
  • Professional Responsibilities
  • Partnerships, Family & Community
  • Student Achievement Data

The first five standards are not a huge departure from the California Standards for the Teaching Profession that are currently used for teacher evaluations throughout Oakland and much of the state:

  • Engaging and Supporting All Students in Learning
  • Creating and Maintaining Effective Environments
  • Understanding and Organizing Subject Matter
  • Planning Instruction and Designing Learning Environments
  • Assessing Student Learning
  • Developing as a Professional Educator

Both the current and proposed evaluation systems include a detailed rubric that describes various levels of teacher achievement in each of the components:

  • Level I: Not Consistent with Standards
  • Level II: Developing Beginning Practice
  • Level III: Maturing Beginning Practice
  • Level IV: Exemplifies the Standard

Perhaps the biggest difference, and possibly the biggest obstacle, to the piloted evaluation system is its inclusion of student-assessment data. This brought up several questions for me, such as:

  • Who will decide what student assessments are included in a teacher’s evaluation?
  • How will student data be weighted versus the other five components?
  • What happens when the student-assessment data disagrees with the rest of the evaluation?

I was happy to hear from the panel that at the two pilot schools, the teachers will get to choose what student assessments they will be evaluated with.

I’m not patently against using student achievement data to judge teachers. Helping students learn is the role of a teacher. I’ve even written before about an assessment that Oakland Unified uses that I would be willing to stake my career and reputation on. I would not be willing to stake my career on the California Standards Test or the California High School Exit Exam. With those two tests, I am very concerned with the secrecy surrounding the questions and assessments, the nature of some of the questions, and how tests like these are harming the very learning they purport to assess.

A second major difference between the current and piloted evaluation systems concerns the frequency, duration, and personnel involved in teacher observations. Under the current system, one administrator does all of a teacher’s observations. There are two planned and one drop-in observation. While the observations are supposed to be extensive, in practice they are very brief, sometimes lasting as little as ten minutes.

Under the proposed system, both a principal and a coach do observations. Each assessor does one long (30+ minute) and two short (15-20 minute) observations. In all, a teacher would be observed six times by two people. Additionally, each teacher would be the subject of a student survey and a peer survey, in which students evaluate their instructor and colleagues evaluate the teacher.

I think this component, the surveys, will be another sticking point for the proposed evaluation system. My next post will focus specifically on that point.