Something surprising happened in Texas during the 1990s: the public schools got better. This might not sound like such a big deal, but it is. Despite a tripling in real per-pupil spending, the nation's public schools have woefully under-performed for decades now. Student scores on national standardized tests have barely budged from the depths to which they had plummeted by the early 1970s; dropout rates remain alarmingly high, especially in dysfunctional urban schools. Texas public school kids, though, have been scoring dramatically higher on standardized tests, and more of them are staying in school—and minority students have shared in the success.

The best explanation for the Texas turnaround: the Texas Assessment of Academic Skills (TAAS), a mandatory, statewide test of public school students in reading, writing, and math, first introduced in 1990 but fully implemented in the mid-nineties under Governor George W. Bush, one of its fiercest advocates. The TAAS holds schools accountable for results: a school's performance on the test receives lots of publicity, so a poorly scoring school's principal and teachers often want to crawl under a rock. Chronic under-performance even raises the specter of state takeover, though it hasn't happened yet. Students have a strong incentive to learn, too: if they don't pass the TAAS tenth-grade exit exam, they don't graduate.

In response to the pressure, the schools seemingly have started to deliver. In 1994, only 53 percent of public school students passed the exit exam. In 1998, 78 percent did—a breathtaking improvement. The pass rate for blacks more than doubled, from 31 percent in 1994 to 63 percent in 1998. Hispanics' rate shot up from 39 percent to 70 percent.

Predictably, as Bush's 2000 presidential candidacy kicked into gear, a cottage industry sprang up to tell you that nothing special happened in Texas public schools on W.'s watch. The Texas "education miracle," the critics argue, is a mirage, and the TAAS is a deeply flawed test. While the naysayers aren't completely off base about TAAS's weaknesses—I was an early critic of the test myself—they're dead wrong about Texas's educational gains. The miracle is real.

Some of the arguments critics use to trash the TAAS are remarkably weak. Peter Schrag in the left-leaning American Prospect, for example, complains that the passing score for the TAAS exit exam is set too low. Similarly, reporter John Mintz in the Washington Post quotes a teacher who claims that "the TAAS math test taken for graduation could be passed by many fifth-graders." Maybe passing the test is too easy, but that still doesn't explain why students' scores have improved so much since 1994. Both Schrag and Mintz also refer to education researcher Sandra Stotsky's findings that the TAAS reading test has become easier over time. But Stotsky's research doesn't account for the remarkable spike in math pass rates—from 58 percent in 1994 to 80 percent in 1998.

A far more serious criticism of the TAAS—the one that made me most skeptical about relying on it—is that widespread cheating distorts its outcomes. When a government bureaucracy develops and administers a test to hold itself accountable, strong incentives invariably arise for the bureaucrats involved to manipulate test results to make themselves look good. No bureaucracy, I've long suspected, would squeeze a vise on itself willingly. Disturbing evidence surfaced that cheating was indeed occurring. In an unprecedented step, a Texas grand jury indicted the entire Austin school district for intentionally altering test scores to boost student pass rates. In Dallas, journalists reported that some teachers asked lower-performing students not to come to school on test day. The problem with such anecdotes, though, is that it's hard to know whether they reflect widespread behavior.

That's why it's crucial to see if other evidence supports or challenges what the TAAS results seem to be showing. The National Assessment of Educational Progress (NAEP) is a good place to look. It's a highly respected, independent measure of educational achievement. Most important, there are no penalties to schools and staff in Texas if students do poorly on the NAEP, so there's no incentive for teachers to falsify results.

The NAEP shows that the improvement in Texas schools was sizable and occurred across the board demographically. The average math score of Texas 13-year-olds went from 258 in 1990 to 270 in 1996 on a 500-point scale, a 4.6 percent increase. Black 13-year-olds boosted their average score 5.5 percent, from 236 to 249. Hispanics increased their score 4.4 percent, from 245 to 256. Reading scores improved more modestly: between 1992 and 1998, fourth-graders' average score went from 213 to 217 (a 1.9 percent hike), with the performance of blacks and Hispanics staying mostly flat.

To put these scores in easy-to-grasp terms, in 1990, 46 percent of Texas eighth-graders demonstrated at least "basic" skills in math; by 1996, 59 percent did. In 1990, only 18 percent of black 13-year-olds in Texas scored at a basic or better level in math; by 1996, 31 percent did—still a poor percentage, but a lot better. Hispanics did better too: 30 percent scored at a basic or higher level in 1990; 42 percent did so in 1996.

On the NAEP reading test, the percentage of Texas students performing at a basic or higher level went from 57 percent to 63 percent between 1992 and 1998. The percentage of blacks reading at a basic or higher level dropped slightly, from 40 percent to 38 percent, but the percentage of Hispanics rose from 41 percent to 48 percent. Texas's black and Hispanic public school students ranked first and second among U.S. minority students on the 1998 NAEP writing section.

The critics who pooh-poohed the TAAS results also try to dismiss Texas students' performance on the NAEP. But here, they're just plain wrong or disingenuous. Jonathan Weisman in The New Republic misreports the test's outcomes. NAEP math scores "stalled," he claims—a false charge, as we've seen. He adds: "NAEP [reading] scores between 1992 and 1998 showed no statistical gain." The key word, however, is "statistical." The four-point reading-score gain does run into a sampling problem, reducing the confidence we have in it from the conventional 95 percent standard of statistical significance to a more modest 87 percent. But hanging one's argument on the fact that reading gains didn't meet the high threshold for statistical significance (while the math-score improvements clearly did) is like the mafia boss claiming he's innocent because no court has ever convicted him: it's technically true but not altogether persuasive.
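That 87 percent figure can be read as the two-sided confidence level implied by the size of the gain relative to its sampling error, under a normal approximation. A minimal sketch of the arithmetic; the standard error used below is a hypothetical value chosen to reproduce roughly 87 percent, since the text does not report NAEP's actual standard errors:

```python
from math import erf, sqrt

def confidence_level(gain, std_error):
    """Two-sided confidence that a score gain of this size is not
    mere sampling noise, using a normal approximation."""
    z = gain / std_error
    return erf(z / sqrt(2))

# Hypothetical: a 4-point reading gain with a standard error of
# about 2.6 points yields roughly 87% confidence -- real, but
# short of the conventional 95% significance threshold.
print(round(100 * confidence_level(4, 2.6), 1))  # ≈ 87.6
```

The same function shows why the math gains clear the bar easily: a 12-point gain with a similar standard error would imply confidence well above 99 percent.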

When they aren't misreporting the NAEP results, the critics attribute success on the test to coaching—"all that drill and test practice," as Schrag sourly puts it in the American Prospect. But even if students are drilling and practicing, they're more likely to be doing it for the TAAS, not the independent NAEP, which carries with it no consequences, for good or for ill, for the schools. Also, while most testing experts agree that coaching can inflate scores on any given test, the improvement doesn't transfer easily from one kind of test to another. While test preparation might explain some of the gains on the TAAS, it certainly doesn't explain the fat gains on the NAEP.

The Washington Post's Mintz dismisses the NAEP gains by changing the standard of success. The NAEP scores suggest that Texas hasn't "closed the achievement gap," he grumbles. In fact, he adds, the results suggest that "the gulf between the state's white and black" kids is widening. But the reason for this "gulf," of course, is that, while blacks, Hispanics, and whites all made significant gains, the white kids made the biggest gains of all. Rather than celebrating these across-the-board improvements, Mintz doesn't even report them. Perhaps he would have been more satisfied if white students just did worse, but I doubt that most sensible observers would exchange higher average gains for a smaller racial gap in scores.

A final attack on the Texas education turnaround charges that TAAS's higher scores have come at the expense of low-performing minority students, who have dropped out instead of taking the exit exam, driving up the average score of the remaining students. Schrag writes, "Walter Haney, an analyst at . . . Boston College, is certain that as reported TAAS scores have gone up, dropout rates have risen sharply, particularly for minority students."

I used to believe this argument. The state's official numbers, incredibly, claim that the dropout rate fell during the nineties from 6.1 percent in 1990 to 1.6 percent last year—numbers "too good to be true," as Schrag correctly observes. Recently, though, I checked to see whether the rates were rising or declining over time. To avoid relying on the ridiculous official numbers, I estimated the dropout rate by computing the ratio of eighth-grade students to high school graduates year by year. Absent a sudden spike in the number of eighth-grade students, this ratio gives us a good idea of how many kids make it from junior high to graduation.
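The calculation is straightforward: the estimated dropout rate is simply the share of an eighth-grade cohort that fails to show up as graduates four years later. A minimal sketch, with purely hypothetical cohort counts to illustrate the method:

```python
def estimated_dropout_rate(eighth_graders, graduates):
    """Share of an eighth-grade cohort that fails to graduate
    four years later -- a proxy for the true dropout rate."""
    return 1 - graduates / eighth_graders

# Hypothetical counts: 250,000 eighth-graders in fall 1989,
# 165,000 high school graduates in spring 1994.
rate = estimated_dropout_rate(250_000, 165_000)
print(f"{rate:.0%}")  # 34%
```

The method's key assumption, noted above, is that eighth-grade enrollment isn't itself spiking; eighth grade is a better baseline than ninth precisely because schools rarely hold students back there.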

I found that around 45 percent of students failed to complete high school in 1993; five years later, the percentage had dropped to 34 percent. While that's still much too high, the important point is that the dropout rate has significantly declined, while TAAS and NAEP scores have gone up. Nor did I find minority students dropping out at a higher rate than in previous years. In fact, while the dropout rate for Hispanics remained the same between 1993 and 1998, the rate for blacks actually fell.

Haney finds an increase in dropout rates because he compares the number of ninth-graders with the number of graduates—a longitudinal comparison distorted by the sharp increase in the number of ninth-graders in recent years. That's because many high schools—taking the TAAS seriously—are holding students back in ninth grade to give them an extra year of prep work before they take the TAAS tenth-grade exit exam.
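The distortion is easy to see with a toy example (all numbers hypothetical): students held back in ninth grade are counted in ninth-grade enrollment twice, so a ninth-graders-to-graduates ratio rises even when the share of students who eventually graduate stays fixed.

```python
def ninth_grade_enrollment(cohort, retention_rate):
    """Fall ninth-grade enrollment when a share of last year's
    cohort repeats the grade and is counted a second time."""
    return cohort + int(cohort * retention_rate)

cohort = 100_000     # new ninth-graders each year (hypothetical)
graduates = 70_000   # eventual graduates per cohort (held fixed)

for retention in (0.00, 0.15):
    enrolled = ninth_grade_enrollment(cohort, retention)
    apparent = 1 - graduates / enrolled
    print(f"retention {retention:.0%}: apparent dropout {apparent:.0%}")
```

With zero retention the apparent dropout rate is 30 percent; hold back 15 percent of students and it jumps to 39 percent, even though exactly the same number of students graduate. That inflation, not more dropouts, is what Haney's comparison picks up.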

The bottom line: even if you don't have confidence in the TAAS results or the officially reported dropout rates, other evidence makes it inescapably clear that Texas public school students are learning more and staying in school longer. Given the lack of significant national educational gains since at least the seventies, the Texas education record is impressive.

What accounts for Texas's success? Most of the reforms that teachers' unions and their fellow travelers propose are unlikely explanations. Take reducing class size, a perennial teachers' union bromide. The average number of students per teacher declined only slightly in Texas between 1993 and 1998, from 15.9 to 15.2. Per-pupil spending, adjusted for inflation, increased from $5,420 in 1993 to $5,655 in 1998, not enough to account for any difference in test scores. The percentage of teachers with advanced degrees actually fell (from 29 percent in 1993 to 25 percent in 1998), while the percentage of teachers with less than six years' experience slightly increased. Nor did the demographic characteristics of Texas students change significantly during these years.

The most obvious explanation is the TAAS itself. In many public school classrooms, especially in inner cities, remember, little teaching ever takes place. Teachers and students enter into an implicit bargain: teachers agree not to make students work, and students agree not to harass the teachers. Teachers who reject this bad deal and force students to learn have told me troubling stories about finding their car tires slashed or being threatened by disgruntled students.

Teachers, too, often embrace foolish progressive-ed nostrums that help them justify—to themselves and to others—the pitifully few hours they spend really teaching in front of a classroom. Some teachers believe that students learn better if they form groups and teach one another rather than listen to the teacher impart knowledge. Other teachers shun competition or testing for fear of damaging student self-confidence. Still others refuse to make students memorize their math tables or grammatical rules, dismissing such old-fashioned techniques as soul-destroying "drill and kill."

The main benefit of the TAAS, I believe, is that, by holding teachers accountable for imparting to their students basic skills in reading, writing, and math, it forces them to get up in front of a classroom and teach, the old-fashioned way. Supporting this view, a recent study from the liberal New York–based Edna McConnell Clark Foundation found that "unacceptable results" on the TAAS drove the Corpus Christi School District to firm up standards for students—requiring eighth-graders to complete Algebra 1, for example. True, an accountability test like the TAAS might be too easy or badly designed, or it might crowd out other legitimate school activities or crudely impose a one-size-fits-all approach on students, poorly serving kids who learn in different ways. You can come up with a host of good reasons to dislike such a test. But if it pushes public school teachers to make sure that students know how to read, write, and do arithmetic, it's worth stomaching the potential drawbacks.

Note that the TAAS only succeeded in Texas because the teachers' unions were flaccid and senior government officials—George W. Bush above all—kept the heat on. If the unions had been stronger, they might have been able to co-opt or thwart the test. And if senior government officials had been weaker in their determination to hold schools and teachers accountable, the disorganized resistance that educators in Texas did offer would have been enough to derail the test.

All this is interesting beyond the politics of the current presidential campaign. What Texas's example shows us is that, if you can pressure even existing public school teachers to teach, big improvements are possible. Holding teachers and schools accountable for the performance of their students is the best way to apply such pressure. Texas's educational miracle underscores the urgency of enforcing accountability on the public schools, something the teachers' unions are resisting tooth and nail in most states—a true conspiracy against the nation's children.


City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.
