The Institute of Education Sciences (IES) was established in 2002 as a quasi-autonomous agency within the U.S. Department of Education. One of its legislative mandates was to encourage the dissemination of scientific research to improve instructional practices in the nation’s classrooms. Yet IES officials have undermined that worthy goal by releasing a methodologically flawed and incomplete study of the federal Reading First program. The study found that students in a small sample of Reading First schools showed no greater improvement in reading comprehension than those in a similar group of schools that applied for the program but didn’t receive federal grants. The IES’s poorly designed study, together with sloppy media coverage of its findings, will likely cause irreparable damage to Reading First—the only federal education program that requires schools receiving federal grants to adhere to instructional approaches backed by evidence and science.

Reading First was the Bush administration’s landmark initiative for raising reading achievement in grades K–3. The program, now in place at 6,000 low-performing schools with mostly poor children in all 50 states, was funded at $1 billion per year for six years. The chief architects of the 2002 legislation—Reid Lyon, the head of reading research at the National Institutes of Health, and Robert Sweet, who served on the staff of the House education committee—were confident that a national program emphasizing early, explicit instruction in the essential components of reading (phonemic awareness, phonics, fluency, vocabulary, and comprehension) would lead to improvement in reading achievement, particularly for disadvantaged children. But the law’s authors also wanted to leave nothing to chance. So from its inception, the Reading First legislation required the Department of Education to carry out a scientifically rigorous and comprehensive evaluation of the new program’s effectiveness, and set aside a huge pot of money—up to $25 million per year for six years—to do so.

The assessment that IES produced after six years, however, was neither rigorous nor comprehensive. “We gave them money for a Cadillac and they bought a Chevy,” says Sweet. (IES spent a total of just $30 million on the impact study.) Moreover, IES’s work on a research design was chaotic and began so late that it couldn’t include the first cohort of several thousand Reading First schools, which began implementing the program in September 2002 or September 2003. One outside expert involved in the early research-design discussions showed me a letter that the study’s director wrote requesting participants for one of the technical advisory groups. The letter was dated December 10, 2003—already very late in the day.

The letter also referred to IES’s plan for a random-assignment study that would involve a total of 30 Reading First schools and 30 “control” schools spread over six school districts. As late as 2004, however, the study design was still undergoing changes. Instead of 30 Reading First schools in six districts, the study would compare 128 Reading First schools in 13 states with a control group of schools that applied for Reading First but didn’t qualify for the federal grants. And instead of the “gold standard” of random assignment, the study would compare the schools using a statistical technique known as a “regression discontinuity” design, a less rigorous approach.

Outside experts warned IES about the methodological weakness of the study design. One reading scientist willing to speak on the record about these concerns is University of Illinois at Chicago professor Timothy Shanahan, former president of the 85,000-member International Reading Association (the world’s largest professional organization of reading teachers and scholars) and a recent inductee into the Reading Hall of Fame. Shanahan told me that he asked IES officials about the study design and was told that it was too late to change it. Along with other experts, Shanahan also pointed out that the study was compromised because “the control groups were often doing the same thing that the Reading First groups were doing.” IES may have overlooked the fact that many states used up to 20 percent of their federal grant funds to encourage low-achieving schools to adopt the same instructional reforms mandated for the Reading First schools. Thus, says Shanahan, “The comparisons were not Reading First with non–Reading First schools, but Reading First with less–Reading First.”

Shanahan’s point about a contaminated control group has since been amplified with empirical precision by James A. Saltzman, codirector of Ohio’s Reading First program. In a policy paper, Saltzman took apart IES’s assumption that the schools in the study sample divided cleanly into Reading First and non–Reading First groups. He offered evidence from the Cleveland school district, which saw impressive gains in reading after it participated in the Reading First program, to prove his point: “While 20 schools were funded by Reading First in the district, the district spent their own funds to run a parallel program that infused scientifically-based reading research (SBRR) strategies and practices alongside a SBRR-based core reading program. . . . Under this scenario, it’s not unexpected that IES cannot tease out any differences among RF and non-RF schools and, in fact, the lack of differences may say more about districts in the study recognizing the importance of SBRR practices and use of a strong core program. If this is true, then it is further evidence of the positive impact of Reading First” (italics in the original).

Like Cleveland, many other school districts and states report significant gains in reading achievement after carrying out one of the suggestions in the Reading First legislation: spending part of their federal grants to encourage scientifically based reading instruction in high-poverty, non–Reading First schools. (See my account of how Richmond, Virginia, scored spectacular gains in test scores after using Reading First materials in all of its schools.)

I recently raised the issue of possible contamination of the study’s control group directly with IES officials. They answered by e-mail: “Spring 2007 surveys and district interviews (to be included in the study’s final report) will provide more information about the extent to which Reading First-like practices diffused to non-RF schools in our study sample.”

But the final IES report is not due for release until 2009. In the meantime, both the New York Times and the Washington Post (neither a supporter of Reading First) have found no reason to withhold final judgment. “An Initiative on Reading Is Rated Ineffective,” trumpeted the headline on a story by Sam Dillon, the Times’s longtime national education reporter. Dillon’s article quoted Senator Edward Kennedy, chairman of the Senate education committee: “The Bush administration has put cronyism first and the reading skills of our children last, and this report shows the disturbing consequences.” Of course, Dillon surely knows that there wasn’t a word in the IES report about Bush administration “cronyism,” yet he let Kennedy’s big lie go unchallenged. Writing in the Washington Post, Maria Glod declared in her lead that “students enrolled in a $6 billion federal reading program that is at the heart of the No Child Left Behind law are not reading any better than those who don’t participate, according to a U.S. government report.” Neither reporter managed to describe the study’s very limited conclusions accurately, and neither bothered to talk with any of the study’s reputable critics, such as Shanahan or Saltzman.

Even before the final IES report appears next year, Kennedy and his Democratic colleagues will likely try to end Reading First. The Democrats have already cut the program’s funding by 60 percent for the coming fiscal year, and the flawed IES report could provide ammunition for the coup de grâce. At a minimum, IES officials should point out that influential people in Washington are drawing unwarranted conclusions from a study that many reputable reading scientists find deeply flawed. To remain silent while their study contributes to so much public misunderstanding would be a stunning display of irresponsibility.
