
For three decades, American courts have relied on the Reference Manual on Scientific Evidence to help judges determine what qualifies as legitimate science in the courtroom. Published by the Federal Judicial Center, the manual is designed to support the Supreme Court’s 1993 ruling in Daubert v. Merrell Dow Pharmaceuticals that courts should serve as gatekeepers against unreliable scientific claims.

A newly released fourth edition, copublished with the National Academies of Sciences, Engineering, and Medicine, suggests that the manual is drifting away from that role. Changes to its treatment of scientific reasoning—and controversy surrounding its new material on climate science—demonstrate a broader shift in how the manual defines reliable evidence. If that shift takes hold, it will influence how courts evaluate some of the most consequential scientific claims in litigation.

Prior to the manual’s development in 1994, courts decided lawsuits based on available evidence, including by asking whether a scientific claim was “‘generally accepted’ as reliable in the relevant scientific community.” In Daubert, the Supreme Court made clear that judges must take a more active “gatekeeping role” when deciding whether scientific evidence is admissible. General acceptance alone was insufficient. Among other possible criteria, Daubert emphasized that evidence should be supported by “generating hypotheses and testing them to see if they can be falsified.”

Just as celebrated scientists have long taught, this is the methodology that “distinguishes science from other fields of human inquiry,” the justices wrote. The first three versions of the manual applied these principles to guide judges in managing scientific evidence in the courtroom.

The fourth edition dramatically changes course. The shift first drew attention when readers discovered a new “climate science” chapter that relied heavily on sources connected to groups supporting tort, nuisance, and other legal claims against fossil-fuel companies to advance climate priorities. The fourth edition also advocates for the use of novel climate “attribution” techniques that link specific harms to particular emissions, a methodology often used to support climate lawsuits.

Following criticism from state and federal officials, the Federal Judicial Center ultimately removed the climate-science chapter from the manual, though it remains in the National Academies’ online version. But the manual retains a completely revised chapter entitled “How Science Works” that elevates consensus and model-based advocacy above scientific results derived from observation and experiment.

That chapter offers a markedly different account of scientific practice. Rather than emphasizing testing and falsification, it describes science as a “complex, iterative, dynamic, and social” process, in which knowledge emerges gradually through interaction among researchers. Through this interactive process, researchers develop ideas with an inherently “tentative nature.” In such a world, the chapter instructs, only “accepted scientific knowledge is reliable,” and what matters most is not whether a hypothesis can be rigorously tested but whether it becomes accepted by the scientific community.

The shift has important implications for how courts evaluate evidence. The chapter suggests that scientific knowledge can be obtained from a “mathematical model” serving as a “surrogate” for a real-world system, and that such a model’s predictions “may be considered evidentiary” by a court once they become “widely accepted.”

The chapter acknowledges that funding sources and made-for-litigation research can produce biased results, and that peer review is no guarantee of accuracy. Yet, though these concerns appear to be endemic in climate research, the chapter almost always illustrates them with examples of a defendant challenging plaintiff positions on issues like “the health effects of tobacco, ozone depletion, and climate change.” It briefly discusses the replication crisis in modern science, in which researchers repeating earlier experiments fail to reproduce the original results. But it dismisses this issue as “a concern about individual investigations, not scientific consensus itself.”

Consensus within the dominant scientific “community,” by contrast, is emphasized throughout. The chapter devotes an entire section to “achieving scientific consensus” and urges judges to place greater weight on evidence believed by the largest number of people who “do” science. The “highest level of certainty science has to offer,” it suggests, is an idea included in “multiple widely used textbooks” and held by “multiple, independent unaffiliated consensus panels/conferences.”

The revised “How Science Works” chapter thus elevates consensus in ways that risk distorting the scientific method. Scientific consensus has often proved wrong. Falsification based on validated observations and experimental findings remains the defining feature of science. Advances in knowledge have always depended on researchers challenging widely accepted views and subjecting them to empirical scrutiny. Thanks to such efforts, we now know that the sun does not orbit the earth; that continents drift; that our planet is far older than once thought; that “phlogiston” and “luminiferous ether” don’t exist; that our Milky Way is one of countless galaxies in the universe; and that the “science” behind eugenics and lobotomies was riddled with racial and class prejudice.

More recent experience shows how easily such “consensus” models can mislead. High-profile projections—such as the “hockey stick” graph depicting rapid climate change, or dire forecasts of storm-related mortality—have at times been presented as “consensus” science, only to be revised or contested as underlying assumptions proved unreliable and actual measured outcomes forced course corrections.

Consensus and modeling clearly have a role in science. Courts routinely confront complex questions that can’t be resolved through simple experiments. But Daubert was meant to ensure that judges don’t simply defer to the experts who shape today’s dogma. Evidentiary jurisprudence demands that judges ask whether expert opinions rest on methods that can be tested, challenged, and validated in the real world.

The revised Reference Manual on Scientific Evidence risks blurring that distinction. By elevating consensus and modeling while downplaying empirical testing, it points toward a more permissive standard for deciding scientific truths, especially those involving predictive and highly complex scientific claims in lawsuits seeking to impose vast new economic costs on society.

The new version of the manual offers another instance of activists working to advance policy goals that have failed to pass muster through democratic means. When judges are asked to rely on what some have called a “hypothesis paired with a model,” often with little regard for costs or benefits, the line between scientific evidence and policy advocacy begins to erode. It’s a line that courts were meant to hold—not loosen.

City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank.