Grading Teachers in Los Angeles
Value-added measurement shows that many of the city’s teachers don’t belong in the classroom.
It’s the start of another school year, and parents everywhere are asking themselves: Is my child’s teacher any good? The Los Angeles Times recently attempted to answer that question for parents. Using a statistical technique known as “value added”—which estimates the contribution that a teacher made to a student’s test-score gains from the beginning to the end of the school year—the paper analyzed the influence of third-, fourth-, and fifth-grade teachers on the math and reading scores of students in the Los Angeles Unified School District. The results suggest a wide variation in the quality of L.A.’s teachers. The paper promises a series of stories on this issue over the next several months.
The Times has admirably highlighted the importance of using data to evaluate teacher performance, confirming the findings of a wide and growing body of research. Studies show that the difference between a student’s being assigned to a good or bad teacher can mean as much as a grade level’s worth of learning over the course of a school year. While parents probably don’t need studies to tell them who the best teachers are—such information is an open secret in most public schools—academic research helps underscore the inadequacy of the methods currently used to evaluate teacher performance. Even the nation’s lowest-performing school districts routinely rate more than 95 percent of their teachers as satisfactory or higher.
Teacher evaluations yield absurdly positive results because they’re not tied to objective measures of performance. The current system relies on classroom observation, a thoroughly subjective measure. And tenure protections ensure that even the rare teacher who does receive a poor rating can’t be removed from the classroom. The result? Principals everywhere hand out positive evaluations to undeserving teachers.
Researchers have worked for years to develop statistical techniques capable of measuring a teacher’s independent influence on student proficiency while accounting for the advantages and disadvantages that students bring with them into the classroom. Value-added is to date the most sophisticated methodology. U.S. education secretary Arne Duncan and his boss, President Obama, support using the value-added metric to assess teachers, and states and school districts across the nation are turning to it to develop new teacher-evaluation tools. Washington, D.C.’s school system has put such a plan into action: last month, the district fired 26 teachers, in part based on poor value-added scores.
Still, value-added analysis is only one piece of a system that would effectively assess teacher quality. And while its promise is real, this form of analysis has limitations, too. As a statistical tool, test-score analysis is subject to random error. Even when the analysis is properly executed, some bad teachers will get high marks according to their value-added assessment and some effective teachers will score poorly. Test-score analysis is “correct” on average—it can tell us a great deal about aggregate teacher quality. It can also help to evaluate individual teachers. But given its messiness—especially when tied to stakes as high as people’s jobs—it cannot be used in isolation.
Critics go too far, however, when they claim that these limitations justify abandoning the value-added approach altogether. The real lesson is that test scores are best used to raise red flags about a teacher’s objective performance; rigorous subjective assessment should follow, to ensure that the teacher is truly performing poorly. If both analyses show that a teacher is ineffective, then action should be taken, including removal from the classroom.
The Times analysis makes clear that many of L.A.’s teachers just don’t belong in the classroom. That’s an important service and represents journalism at its best. However, the paper’s promise to create a public database linking individual teachers by name to their measured influence on student proficiency—a move lauded by Duncan and reportedly being considered by Washington, D.C., school chief Michelle Rhee—is worrisome. Publicly listing each teacher’s value-added scores would imply that test-score analysis is sufficient.
In fact, the Times’s story is itself an example of the right way to use value-added measurement. Along with reporting its overall findings, the Times observed and interviewed several teachers who posted very high and very low marks. Reporters’ classroom visits illustrated why most of those teachers received the scores they did. However, the reporters also uncovered at least one teacher who was generally thought to be effective within her school but whose scores showed otherwise. Perhaps that teacher was a victim of random error—or perhaps she was overrated within her school. A rigorous follow-up with real consequences can determine the truth.
Improving teacher evaluation is one of the most important steps in efforts to reform U.S. public education. It’s important that we get it right, for the sake of kids, parents, and good teachers alike.
City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.