The Hechinger Report is a national nonprofit newsroom that reports on one topic: education.

D.C. schools' 'value-added' teacher firings
Aaron Pallas

Last Friday, Michelle Rhee, Chancellor of the District of Columbia Public Schools, announced the firing of 165 teachers, based on their performance on IMPACT, the District’s spanking-new performance assessment system. IMPACT has several components, depending on a teacher’s classroom assignment. All teachers are evaluated according to a Teaching and Learning Framework, and there are other components as well. I’m particularly concerned about Group 1, general education teachers in grades four through eight, for whom 50 percent of the IMPACT score is based on an individual value-added measure – that is, a measure of a teacher’s contribution to the learning of his or her students over the course of a year. According to Bill Turque of The Washington Post, 26 of the 165 teachers who were fired fall into this category. An additional 737 teachers were rated “minimally effective” and given one year to shape up or be terminated next year.

This was one of the first instances of the use of value-added measures for high-stakes personnel decisions, but it won’t be the last. Last fall, New York City Mayor Michael Bloomberg decreed that students’ test scores would be used in making tenure decisions for New York City teachers this year – and the Teacher Data Reports, New York’s version of value-added assessment, were central to the guidance provided to principals about whether a teacher should be presumptively denied tenure (in the case of a low value-added score) or presumptively granted tenure (in the case of a high value-added score).

Moreover, many states, seeking to improve their chances at winning a share of the $4.3 billion Race to the Top honey-pot, fell over themselves to enact legislation that would link teacher evaluations to student test scores, and would expand testing to the early elementary grades and high school grades so that more teachers could be evaluated on the basis of their students’ test performance. This in spite of a substantial body of scientific evidence and expert opinion urging caution in using student test scores for making high-stakes decisions about teachers.

Among the key critiques: the tests were neither designed nor validated for use in evaluating teachers; a teacher’s location in the value-added distribution can bounce around from one year to the next, or from subject to subject; students may be assigned to teachers’ classrooms in ways that invalidate the assumptions of the value-added models; a teacher’s value-added score may be sensitive to which student and school factors are included or omitted in the model; and the value-added models produce estimates that are riddled with error or uncertainty, yet are often treated as though they’re precise measurements.
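The instability critique is easy to see in a toy simulation. The sketch below is purely illustrative — the teacher counts, effect sizes, and noise levels are invented assumptions, not parameters from IMPACT or any actual value-added model — but it shows how, when single-year estimates are noisier than the underlying differences between teachers, the set of teachers flagged as "low performers" reshuffles substantially from one year to the next, and genuinely average teachers get swept into the bottom group.

```python
import random

random.seed(0)

# Illustrative parameters only -- not drawn from any real VAM.
N_TEACHERS = 500
TRUE_SD = 1.0      # spread of true teacher effects (arbitrary units)
NOISE_SD = 1.5     # single-year classroom sampling error, assumed larger than the signal

# True (unobservable) teacher effects
true_effect = [random.gauss(0, TRUE_SD) for _ in range(N_TEACHERS)]

def observed_scores():
    """One year's value-added estimates: truth plus classroom-level noise."""
    return [t + random.gauss(0, NOISE_SD) for t in true_effect]

year1 = observed_scores()
year2 = observed_scores()

def bottom_quintile(scores):
    """Indices of the lowest-scoring 20% of teachers in a given year."""
    cutoff = sorted(scores)[len(scores) // 5]
    return {i for i, s in enumerate(scores) if s < cutoff}

flagged1 = bottom_quintile(year1)
flagged2 = bottom_quintile(year2)

# How many teachers flagged in year 1 are flagged again in year 2?
overlap = len(flagged1 & flagged2) / len(flagged1)
print(f"Year-to-year overlap of bottom quintile: {overlap:.0%}")

# How many genuinely average teachers (middle 60% of true effects) get flagged?
ranked_truth = sorted(range(N_TEACHERS), key=lambda i: true_effect[i])
middle = set(ranked_truth[N_TEACHERS // 5 : -(N_TEACHERS // 5)])
misflagged = len(flagged1 & middle) / len(middle)
print(f"Share of truly average teachers flagged in one year: {misflagged:.0%}")
```

Under these assumed parameters the bottom quintile in one year overlaps only partially with the next year's, and a nontrivial share of teachers with genuinely average true effects lands in the flagged group — which is exactly why treating a single year's estimate as a precise measurement is hazardous for high-stakes decisions.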

What is troubling to me is that, to date, districts using these complex value-added systems to evaluate teacher performance haven't made the methodologies known to the general public. New York City's Department of Education has produced Teacher Data Reports for several years running, but technical reports on the methodologies used haven't been released to the public. Perhaps that's not so serious when these tools are used for internal diagnostic purposes, but an important set of policy goals is compromised when these methodologies are not fully disclosed.

When performance evaluation systems are used to make high-stakes decisions about public employees, it is critical that the agencies developing these systems hold themselves accountable to the public for these decisions by making the methodologies available for public scrutiny. It is not enough for agencies to say that the methodologies are sophisticated, scientific or complex, or that they’ve outsourced the development of the systems to private contractors (e.g., Mathematica Policy Research, in the case of the D.C. Public Schools, and Battelle Memorial Institute and the Wisconsin Center for Education Research, in the case of the New York City schools). Yes, the methodologies involve complex statistical calculations, and most members of the public will have neither the expertise nor the inclination to judge their adequacy. But there should be independent assessments from individuals and organizations not in the employ of the agencies, and this can only occur if the relevant technical materials are made available to the public. Anything less is a grim caricature of the ideals of accountability and transparency.

Aaron Pallas is Professor of Sociology and Education at Teachers College, Columbia University. He has also taught at Johns Hopkins University, Michigan State University, and Northwestern University, and served as a statistician at the National Center for Education Statistics in the U.S. Department of Education. Pallas writes the Sociological Eye on Education blog for The Hechinger Report.


Letters to the Editor



  1. Some time down the road, these terminations in D.C. or NYC will be argued before an arbitrator; I would love to call Prof. Pallas as my expert witness.

  2. Excellent commentary; I wish someone in power were listening. FYI, the same group that supposedly helped develop the nonsensical IMPACT system, Mathematica, just released a critique of value-added teacher evaluations showing a 25-35% chance that an average teacher would be rated a poor one. Throwing dice with our teachers’ jobs, particularly in our highest-need schools, is not a formula to attract the best and the brightest, nor the ones who really want to devote themselves to teaching as a career.

  3. I understand that IMPACT is imperfect at best, but rather than commentary slamming the ideas that folks like Ms. Rhee are coming up with, what I’d love to see is how to do teacher evaluations effectively. It is always easy to slam others’ ideas; what isn’t easy is coming up with solutions to fix our ‘fair at best’ national education system, and a big part of that is evaluating teachers just as for-profit businesses evaluate their employees. It isn’t sufficient to say that it isn’t possible to do it correctly, but quite frankly that is all that is available in mainstream media.

  4. Not sure that firing them is the answer; how about helping them to grow and develop into better teachers? Employees seem to be seen as disposable these days. Firing should be a last resort. There is not enough detail here to judge whether this is reasonable, but that is an awful lot of teachers to let go at once. I find it disturbing.
