A quarterly magazine of urban affairs, published by the Manhattan Institute, edited by Brian C. Anderson.
Compstat for Teachers
Public schools can join the same data revolution that transformed urban policing.
14 July 2009
As City Journal's Heather Mac Donald has argued, New York's most important policing reform in the 1990s was Compstat. This revolutionary data system tracked crime precisely, allowing the New York Police Department to focus its efforts on the most troubled neighborhoods, and to hold precinct captains accountable when things went wrong. Good information is a powerful tool not just in policing but in many policy areas. In fact, a similar data revolution has been brewing in education for about a decade, and President Barack Obama and his secretary of education, Arne Duncan, are now pushing it. The use of administrative data sets has the potential to improve public schools as dramatically as Compstat cleaned up Gotham's streets, if only we'll let it happen.
Large administrative data systems were a natural by-product of the movement toward increased student testing in the United States. As more and more states began administering their own tests to students, the scores needed to be collected and the data maintained. Some states and districts went the extra mile and created data systems capable of tracking the performance of individual students over time. And the most sophisticated systems also match students' data to their teachers, enabling researchers, with the aid of powerful statistical tools, to identify the influence that each teacher has on student academic performance. To a layperson, there may be nothing less interesting than volumes of test-score data inside mainframe computers, but these systems have enormous potential to improve the way we evaluate the quality of our teachers.
The current system of teacher evaluations, moreover, is clearly flawed. Teachers are usually observed for a single class period once or twice a year, and in some states, senior teachers undergo evaluation just once every three to five years. In theory, these limited observations are supposed to reveal whether teachers' performance has been satisfactory throughout the year. In practice, they are rubber stamps. The nonprofit New Teacher Project analyzed teacher evaluations in 12 large school districts across four states and found that in districts using a binary evaluation system (the only ratings being "satisfactory" and "not satisfactory"), over 99 percent of teachers received the thumbs-up rating. Even districts that used broader evaluation distinctions ranked 94 percent of teachers in one of the top two tiers and deemed just 1 percent unsatisfactory.
No honest person can argue that all teachers are performing up to par. In fact, the homogeneous results of teacher evaluations are completely at odds with a wide body of teacher-quality research conducted using administrative data sets. This research consistently finds vast differences in teacher effectiveness both across and within schools. We all know that there are good teachers and bad teachers out there, but we don't distinguish between them in any meaningful way. As Secretary Duncan recently lamented to a group of education researchers: "In California, they have 300,000 teachers. If you took the top 10 percent, they have 30,000 of the best teachers in the world. If you took the bottom 10 percent, they have 30,000 teachers that should probably find another profession, yet no one in California can tell you which teacher is in which category. Something is wrong with that picture." He's right.
Currently, 21 states have data systems capable of matching teachers to students. Duncan has pledged to use his discretionary funds under the federal stimulus package to get more states to do the same. It seems like a no-brainer. After all, who's against having more information?
The teachers' unions, that's who. They're fighting hard against the adoption of these systems precisely because the information they reveal is so useful. The unions insist, against all evidence and logic, that no meaningful variation exists in teacher quality. Further, in a clear case of making the perfect the enemy of the good, they argue that because test scores are a limited measure of student proficiency and statistical models for evaluating teacher quality are imperfect, the information that data-system analyses produce for individual teachers is not ready for prime time.
As always, the unions have gotten far by coupling their dubious arguments with their overwhelming political influence. Not long after advocates for a new data system suggested that it be used to inform teacher-tenure decisions in New York City, for example, the state legislature explicitly banned its use for that purpose. (The city can still use its data system to develop unofficial teacher evaluations, which are distributed to principals.) Not to be outdone in legislative insanity, California has made it illegal to link electronically the state's student test-score data set with a data set identifying individual teachers.
The first step to improving teacher quality in the United States is to measure it accurately. Expanding administrative data systems to all states would improve our ability to identify which teachers are effective, which need assistance, and which should be shown the door. To be sure, these measures aren't perfect and shouldn't be used in isolation to make employment or compensation decisions, but they can certainly be used to inform them. Those interested in education reform should join the push to increase the use of teacher and student data.
Marcus A. Winters is a senior fellow at the Manhattan Institute.