In early March, British leaders planned to take a laissez-faire approach to the spread of the coronavirus. Officials would pursue “herd immunity,” allowing as many people in non-vulnerable categories as possible to catch the virus, in the hope that it would eventually stop spreading. But on March 16, a report from the Imperial College Covid-19 Response Team, led by noted epidemiologist Neil Ferguson, shocked the Cabinet of the United Kingdom into a complete reversal of its plans. Report 9, titled “Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand,” used computational models to predict that, absent social distancing and other mitigation measures, Britain would suffer 500,000 deaths from the coronavirus. Even with mitigation measures in place, the report said, the epidemic “would still likely result in hundreds of thousands of deaths and health systems (most notably intensive care units) being overwhelmed many times over.” The conclusions so alarmed Prime Minister Boris Johnson that he imposed a national quarantine.

Subsequent publication of the details of the computer model that the Imperial College team used to reach its conclusions raised eyebrows among epidemiologists and specialists in computational biology and presented some uncomfortable questions about model-driven decision-making. The design of the Imperial College model itself appeared solid. As a spatial model, it divides the area of the U.K. into small cells, then simulates the processes of transmission, incubation, and recovery over each cell. Because it factors in a good deal of randomness, the model is typically run tens of thousands of times and the results averaged, a technique commonly referred to as ensemble modeling.
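To make the general technique concrete, here is a minimal sketch in Python of a stochastic spatial epidemic simulation with ensemble averaging. It is purely illustrative and bears no relation to the Imperial College code: the grid size, the infection probability BETA, the recovery probability GAMMA, and the ensemble size are all assumed values chosen for readability.

import numpy as np

# Illustrative stochastic lattice SIR model (not the Imperial College code).
# Cell states: 0 = susceptible, 1 = infected, 2 = recovered.
GRID = 50      # 50 x 50 spatial cells (assumed size)
BETA = 0.3     # per-neighbour daily infection probability (assumed)
GAMMA = 0.1    # daily recovery probability (assumed)
DAYS = 120     # length of each simulated outbreak
RUNS = 200     # ensemble size; real studies use far more runs

def simulate(rng):
    state = np.zeros((GRID, GRID), dtype=np.int8)
    state[GRID // 2, GRID // 2] = 1                  # seed one infection
    curve = []
    for _ in range(DAYS):
        infected = (state == 1).astype(np.int8)
        # Count infected neighbours of every cell (4-neighbour contact,
        # with wrap-around edges for simplicity).
        neighbours = (np.roll(infected, 1, 0) + np.roll(infected, -1, 0) +
                      np.roll(infected, 1, 1) + np.roll(infected, -1, 1))
        # A susceptible cell escapes infection with probability (1 - BETA)^k.
        p_infect = 1.0 - (1.0 - BETA) ** neighbours
        new_cases = (state == 0) & (rng.random((GRID, GRID)) < p_infect)
        recovered = (state == 1) & (rng.random((GRID, GRID)) < GAMMA)
        state[new_cases] = 1
        state[recovered] = 2
        curve.append(int((state == 1).sum()))
    return curve

rng = np.random.default_rng(seed=42)
ensemble = np.array([simulate(rng) for _ in range(RUNS)])
mean_curve = ensemble.mean(axis=0)                   # the averaged prediction
print("Peak of mean infection curve:", int(mean_curve.max()),
      "cells on day", int(mean_curve.argmax()))

Even a toy like this shows where the answers come from: a handful of explicit parameters, a random-number seed, and an averaging step, all of which can be inspected, documented, and rerun.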

In a tweet sent in late March, Ferguson, then still one of the leading voices within the U.K.’s Scientific Advisory Group for Emergencies (SAGE), the body tasked with handling the coronavirus crisis, stated that the model was implemented in “thousands of lines of undocumented” code written in C, a widely used, high-performance programming language. He refused to publish the original source code, and Imperial College has refused a Freedom of Information Act request for the original source, on the grounds that the public interest is not sufficiently compelling.

As Ferguson himself admits, the code was written 13 years ago to model an influenza pandemic. This raises multiple questions: other than Ferguson’s reputation, what did the British government have at its disposal to assess the model and its implementation? How was the model validated, and what safeguards were in place to ensure that it was correctly applied? The recent release of an improved version of the source code does not paint a favorable picture. The code is a tangled mess of undocumented steps, with no discernible overall structure. Even experienced developers would have to make a serious effort to understand it.

Modelling complex processes is part of my day-to-day work. It’s not uncommon to see long and complex code for predicting the spread of an infection through a population, but tools exist to structure and document such code properly. The Imperial College effort suggests an incumbency effect: the college and Ferguson enjoyed such outstanding reputations that their model was accepted on authority alone. The code on which they based their predictions would not pass a cursory review by a Ph.D. committee in computational epidemiology.

Ferguson and Imperial College’s refusal of all requests to examine taxpayer-funded code that supported one of the most significant peacetime decisions in British history is entirely contrary to the principles of open science—especially in the Internet age. The Web has created an unprecedented scientific commons, a marketplace of ideas in which Ferguson’s arguments sound only a little better than “the dog ate my homework.” Worst of all, however, Ferguson and Imperial College, through both their work and their haughtiness about it, have put the public at risk. Epidemiological modelling is a valuable tool for public health, and Covid-19 underscores the value of such models in decision-making. But the Imperial College model implementation lends credence to the worst fears of modelling skeptics—namely, that many models are no better than high-stakes gambles played on computers. This isn’t true: well-executed models can contribute to the objective, data-driven decision-making that we should expect from our leaders in a crisis. But leaders need to learn how to vet models and data.

The first step toward integrating predictive models into evidence-based policy is thorough assessment of the models’ assumptions and implementation. Reasonable skepticism about predictive models is not unscientific—and blind trust in an untested, shoddily written model is not scientific. As Socrates might have put it, an unexamined model is not worth trusting.
