Observations Part 3

Whine & Cheese

David Knuffke
AdminThoughts

--

Vision by Aldric Rodríguez from the Noun Project

This is the final piece in a series of posts about observing lessons. Readers may find it helpful to consult the first post for some background information, and the second post for the clinical details of my district’s observation cycle, before they dive into what follows. But far be it from me to stop the bravest among you from forging boldly ahead.

Let’s get it out on the page right up front: I have some problems with our current observation framework. This isn’t the same thing as me saying “it’s terrible,” or that observation to help determine a teacher’s continuing employment should not be a thing. But there are issues with the current process, and those issues are almost entirely due to one overarching dynamic:

An ever-increasing loss of local control.

Diminishing local control of teacher observations is easily my biggest problem with the way they happen in New York State. For a variety of reasons, the NYS Legislature (working under the direction of the Governor) decided a few years back that schools needed even more legally mandated supervision than what the Federal Government had implemented for NCLB and RTTT compliance. This decision affected many things, including observations. NYSED’s RTTT compliance measures already required districts to use one of a series of “approved” observation rubrics. In itself, this was not a huge deal for my district, as we had moved to using the Danielson rubric (one of the sanctioned options) before any mandate. But even here, the impact of state mandates was already being seen, and pretty much every choice that NYSED has made since then has only amplified the observation dynamics that I find most problematic.

Here’s a trivial example: Danielson rates its domains according to the following scale: Unsatisfactory, Basic, Proficient, Distinguished. NYSED mandates required us to change those four descriptors to Ineffective, Developing, Effective, and Highly Effective. I literally have no idea what logic underlies that kind of mandate. How does the change accomplish anything worthwhile? To my linguistic sensibilities, the mandated language is less descriptive and a bit more negative than Charlotte Danielson’s original choices. “Unsatisfactory” teaching practices seem different to me than “Ineffective” ones. This is an admittedly minor quibble, but it’s an easy example of how NYSED’s requirements (as charged by NYS Government more broadly) don’t do anything useful for evaluations in functional school districts.

In case superficial concerns don’t sway you, here’s a much bigger deal: the current system requires us to misuse observations. More specifically, it requires us to take a tool that was designed to help teachers examine their practice and develop as professionals, and use it in an evaluative capacity. To put it in educator speak, we are using a formative tool for a summative process. I’m not aware of any place where Charlotte Danielson suggests that her framework was designed with an eye toward judging whether a teacher should remain employed*. But that’s what NYSED requires us to do. This is classic “using the wrong tool for the job” thinking. And as is always the case, when you use the wrong tool for the job, the job doesn’t get done particularly well, and you usually damage the tool. The moment considerations of continuing employment are applied to rubric ratings, those ratings are no longer interpreted by the rated teacher as anything much more than a series of boxes to check in order to continue providing for themselves and their families. This is the exact opposite of a culture that actually encourages the kinds of risk-taking, willingness to acknowledge weaknesses, and spirit of collaboration that a healthy school system should aspire to. It’s also totally counter to what we know about how to motivate people and make them feel valued. But none of that changes the fact that it’s a legal requirement of teacher evaluations in NYS.

Perhaps the worst dynamic of all is that current NYSED mandates pretty much prevent districts from changing their observation processes. Any modifications to observations in a district have to work through an APPR committee, and even then very little can be changed. In our district, this has resulted in the phasing out of our “professional studio” observation model, in which a team of four teachers participates in each other’s observation cycles with an administrator. It also stopped me from pursuing a video-based observation structure this year. In both cases, even though the alternative observation structures are widely regarded as more valuable for teachers than either our formal or walkthrough cycles, they aren’t state-approved for our teacher evaluation process, and as such, they cannot “count” for teacher observations. So not only is NYSED requiring the misuse of the rubric, it’s also making innovation in observation structurally more difficult.

Concerns noted, let’s also point out that everything is not doom and gloom. There are mechanisms that districts can use to work within their mandates and still help teachers understand that they are valued and that taking risks is not going to endanger their evaluations. And I know from my own experience that many teachers don’t really care about what NYSED might suggest is the “way things need to be”; they follow a truer north when making their decisions about what needs to be done to teach children (a younger, teacher version of me decided to enshrine that thinking as the “JFT model” of education). Great teachers want to improve their craft, regardless of what processes “count.” But the fact that great people are working for kids doesn’t negate the mandated reality of the situation. It doesn’t make it any less possible for more autocratic minds to move into a system and start to destroy the relationships at work within it**. And it doesn’t make anything that NYSED has required over the past decade any more necessary for the vast majority of districts in NYS. Functioning districts are not made any more functional through this process. Morale declines, and the willingness to engage in an open spirit of professional development diminishes. Dysfunctional districts might see some improvement, or they might not, but we sacrifice some amount of the good works and efforts of the vast majority of working public systems to try to address the few that are struggling. At the end of the day, we’ll only really know by looking at the educational outcomes of children, which seems to me like a pretty big gamble.

* That noted, the Danielson Group did go through the process of submitting their rubric to NYSED for use in the current observation regime, so there must be at least tacit acceptance of its use for this purpose.

** While I have had the great fortune never to have worked in a district that has undergone such a regression, I know from my professional learning network that these types of changes do occur, and that when they do, it frequently doesn’t take much at all to cause them.

Thanks for reading. Strong stuff here, so I wouldn’t be surprised if you vehemently agree or disagree. Drop me a line if you’d like to let me know your thoughts. If you’ve found something of value here, consider supporting the site.

--

Writing about whatever I want to, whenever I want to do it. Mostly teaching, schools and culture.