There's been a heavy emphasis of late on teacher evaluation, with states and districts making it a pillar of their efforts to rethink tenure, pay, and professional norms. States and districts have adopted systems that rely heavily on observational evaluation to complement or stand in for value-added metrics. In many cases, they are turning to celebrated edu-consultant Charlotte Danielson's "Danielson Framework for Teaching." Just last week, Danielson was in New York City with NYCDOE chief academic officer Shael Polakow-Suransky to discuss NYC's reform efforts (NYC is using Danielson's framework as it designs new teaching standards). The Consortium on Chicago School Research is currently in the midst of a two-year review examining the adoption of the Danielson Framework in Chicago. The first report, released last year, termed the Danielson Framework "a reliable tool for identifying low-quality teaching" and said it "has potential for improving teacher evaluation systems." In light of all this, I thought it worth chatting with Charlotte about some of the ins and outs of teacher evaluation and what cautions or advice she might have for practitioners or policymakers.
Rick Hess: For context, can you say a bit about where the Danielson Framework came from?
Charlotte Danielson: It's an outgrowth of work I was part of at ETS on Praxis 3 [in the late 1980s]. Praxis 3 was an observation-based assessment of first-year teachers for the purpose of a continuing license. In order to do that, ETS had to commission a lot of serious research as to what is good teaching. I got hired to be part of the project because they realized if you wanted to have live observations of teaching, you had to have trained observers. Which is a no-brainer, but I was the only person who had actually developed training programs.
I could see that there was a need for [observational evaluation] beyond first-year teachers. We've seen what happens when people get National Board certification--the preparation you do for it is valuable professional development. It struck me that the same philosophy could apply if we had clear standards of practice for regular teachers. That's what caused me to write the framework... I wrote this book and didn't have a clue that anything would ever come of it; I just did it because I thought there was a need. It came out of assessment, but I didn't see it as a framework for assessment--I just thought it was good for understanding practice. ASCD published it, and they made it a member book, and so it got sent out to about 90,000 people.
RH: When was this?
CD: It was published in 1996. And then I started getting emails and calls from teachers all over the world thanking me for writing the book, and saying, "Now we have our new teacher evaluation system." And I had to break the news to them that actually they did not, because in order to have an evaluation system you needed a whole lot of other things--like procedures, training, and you need to make a lot of decisions. A system is more than just your evaluative criteria and level of performance. Before that, I'm not aware that anybody had created a rubric for teaching. We had rubrics for student learning, and we realized that if you're going to assess student performance in complex learning, you needed a rubric--and it wasn't going to be about right or wrong, but a continuum of performance. And I thought that's teaching; it's complex performance.
RH: When you work with districts employing your framework, what do you see that gives you confidence they're using it well?
CD: Let me give you a story of when it's not done well. I was contacted early on by a large urban district in New Jersey that...had a horrible evaluation system. It was top-down and arbitrary and punitive and sort of "gotcha." And they developed a new one based on my book, and it was top-down and arbitrary, and punitive. All they did was exchange one set of evaluative criteria for another. They did nothing to change the culture surrounding evaluation. It was very much something done to teachers, an inspection, used to penalize or punish teachers whom the principal didn't like...[and] I discovered that if I didn't do something here, my name would get associated with things people hate.
So I thought about what it would take to do teacher evaluation well. And I discovered that doing it well means respecting what we know about teacher learning, which has to do with self-assessment, reflection on practice, and professional conversation. And when you do those things, you have enormous growth... [because] people appreciate the opportunities to talk in-depth about the challenges of practice, and it becomes a vehicle for professional learning instead of just a ritual you go through.
RH: In general, how faithfully have schools and districts applied your framework?
CD: I don't have a valid answer to that question because I don't know what goes in the numerator or the denominator. Up until now, there weren't a lot of people who were just adopting this thing whole-scale without a lot of assistance from me or one of my consultants. So I think reasonable fidelity was pretty high because they had some good coaching. But now, I have absolutely no control over it and I don't try to be a policeman--I've never thought that was productive. But, even if I wanted to, I don't think I could. People need something, so they are grabbing something and this looks as good as everything else. So there is a potential for this to be used badly, absolutely.
RH: What can you do to help ensure that your framework is used thoughtfully?
CD: We do training. I've developed some online training programs with online vendors, so when people use those they at least hear me talking, but I don't even know how well they implement those things. There are a lot of unknowns here.
RH: If states or districts are using these systems at scale, it creates an enormous need for people who can do these evaluations well. How big a concern is that?
CD: People evaluate teachers now, and we've found that it doesn't really take any longer to do it well than to do it poorly. But it does take longer to do it well than to not do it at all. You do need boots on the ground to do this, but it doesn't have to always be administrators--it can be department chairs or supervisors. And teachers in good standing don't have to do a comprehensive, formal evaluation every year--they do it every other year, or every three years, and in the other years teachers engage in rigorous, self-directed inquiry.
With video technology, you can do a lot of this remotely, and that's very powerful. So there are other options, but it is labor-intensive. And to the extent that the public does not trust educators to do evaluation well--it hasn't always been done well, historically, and we have plenty of teachers not teaching well and schools not doing anything about it--the policymakers have a point. But just more inspection isn't the answer; it seems to me the answer is high-quality teacher evaluation. And that's not impossible to do--we know how to do it--but there is a school-level capacity problem. It takes training, and in order to evaluate teachers well you need a good three or four days of training.
RH: Are you working at all on this question of ensuring that observers have the training to do high-quality, consistent observation?
CD: I'm doing some work with Teachscape. And we're developing a proficiency test for observers, which is a requirement that has been written into law in a couple places, including Illinois and New York. They are saying, "If you're going to evaluate teachers in this state, you've got to pass a test." Now, they aren't specifying what that test ought to be, but I don't know anyone else trying to develop a test. But we are, and it's far down the track. It should be available in mid-October.
RH: In places like DC and Florida, policymakers have required the use of observational evaluations to help make decisions about job security and compensation. What's your take on such efforts? Do you have suggestions or cautions that apply?
CD: My experience with those issues is mixed. School districts have an absolute obligation to ensure quality teaching. The question is what counts as evidence, and how do you attribute evidence to the teacher. That's why we'll always have to have assessment of teacher practice. Partly because it gives you diagnostic information--if things aren't going well, if kids aren't learning, then why not? But the net result is you have to have student learning.
On the question of observation and if it's productive, how high are the stakes if a rating is given? A lot of the policy types, they want a number. And this stuff doesn't lend itself to numbers. But the minute a teacher's performance rating is a high-stakes matter, people are going to do whatever they have to do to be rated highly. And the things you have to do to be rated highly are exactly the opposite of things you'd do if you wanted to learn--you wouldn't try anything new, you would be protective, you would be legalistic about the ratings, and you'd argue. None of that makes you open to improving your teaching. So my advice is to only make it high-stakes where you have to. If someone is on the edge of needing remediation, then that is high-stakes and you should use it. But if your main purpose is to say these 80 percent of our teachers are performing pretty well, so let's use this process to get better, that's a very different way of thinking.
RH: Right now, we're seeing widespread efforts to use observational frameworks as high-stakes tools. Are you suggesting that that's a concern?
CD: What I hope people guard against is making it high-stakes so long as practice is above a certain level. If you aren't going to fire the person, then what's the point? Some people who are driving this policy have a "get rid of the bad apples" mentality, but I'm [not sure there are sufficient replacement teachers out there]. If we assume that most of these teachers right now are still going to be on staff in five years' time, then the challenge is how do we get better? And that entails very different procedures and a different culture than it does if your goal is to smoke out the bad apples.
RH: If you have one bit of advice for those seeking to do observational evaluation well, what is it?
CD: The first thing to do is to arrive at consensus around what is good teaching...Having a shared and common understanding about what is good teaching is important. Ask teachers, what does this look like in my classroom? If you do nothing else but that, you'll improve because a lot of other things fall into place. That is, if you know what good teaching is, then how will you know it when you see it? How do you evaluate it? But that conversation shouldn't be shortchanged.
RH: What do you say to policymakers who fear that sounds like a recipe for foot-dragging?
CD: You spend a month or two to understand the instruments. Call it training. Call it whatever you want--it's people understanding the criteria on which their performance will be judged. And that, of course, is a fundamental principle of equity: that you don't evaluate people on something they don't know. Having this conversation gets people on board, and I've never had it not work. A criterion of something worth doing doesn't have to be that the teachers don't like it! And to hear all these [reformers] talking, you'd think that was their criterion.