Worst-Case Scenarios, by Cass R. Sunstein (Harvard University Press, 352 pp., $24.95)
Cass Sunstein’s intensely complex Worst-Case Scenarios explores how we should assess the risks of disasters, from abrupt global warming to nuclear attack, and craft our responses to them. Sunstein doesn’t clearly answer whether it’s worth, say, launching a full-scale invasion of Pakistan to prevent its nukes from falling into terrorists’ hands, or spending $50 billion to restore barrier islands and protect the southern coastline of the U.S. from tremendous hurricanes. Rather, his book is a useful warning to be wary of those who seem to have all the answers—because such answers, even when considered within the most elegant theoretical frameworks, only raise more questions.
The prolific Sunstein, a professor at the University of Chicago Law School, illustrates how neither people nor politicians make decisions about the future in a passionless vacuum. Before September 11, he writes, even though Islamist terrorism was a real threat, the public would have rejected a politician who proposed a version of the Patriot Act; after the attacks, the law passed easily. Similarly, Americans don’t worry much about the dangers that climate change may pose. “The vividness of the attacks of 9/11 drives people’s probability judgments about terrorism, whereas no such incident caused by climate change is available to them,” Sunstein writes.
Further, he contends, in trying to minimize the possibility of one catastrophe—another September 11—we may devise a solution that seeds new catastrophe: a long war that inflames passions and creates more potential terrorists. Sunstein thus doesn’t subscribe to Dick Cheney’s “1 percent doctrine,” which holds that if the probability of a catastrophe is even 1 percent, we must treat that probability as a certainty and take steps to eliminate it. Reasonable people don’t want to shut down America’s borders completely because a cargo ship might smuggle in a nuclear weapon, for example, even though a strict reading of the 1 percent doctrine would seem to call for exactly that; such a shutdown might itself cause thousands of deaths by increasing global poverty.
But Sunstein does concede that in a few situations, particularly when it’s impossible to know the probability of a certain hypothetical event, it makes sense to take action to avert a worst-case scenario, as long as we keep in mind that our actions could create an even worse situation. In his most cogent chapter, he contrasts the fate of the Kyoto Protocol, the treaty to cut greenhouse-gas emissions, with that of the earlier Montreal Protocol, a largely successful treaty to cut ozone-depleting gases. Both treaties dealt with complicated issues of science, politics, and conjecture, but Montreal succeeded where Kyoto may fail (though Kyoto is in effect in Europe, neither the U.S. nor China has thus far participated in Kyoto-style emission reductions). In the earlier case, advocates convinced the American public and its leaders that the domestic and global benefits of protecting the ozone layer through readily accessible technology far exceeded the costs. America, under that famous radical environmentalist Ronald Reagan, was soon on board, assuring the treaty’s success. With Kyoto, by contrast, Americans—as well as Chinese leaders, now under pressure to control that country’s emissions—have decided so far that the specific costs simply aren’t worth the possibility of averting catastrophic climate change.
Sunstein believes that we overlook too much in making these calculations, and proposes that we pay “close attention to both the magnitude and the probability of harm.” For probability, he notes sensibly that leaders and the public should understand that “a one percent chance of 10,000 deaths is not worth less attention than a 50 percent chance of 200 deaths.” But the magnitude of the 10,000 deaths—from a terrorist attack, say—is far greater, and has a much stronger impact on the psychology of the public. When assessing the magnitude of possible disasters, we should also consider whether damage from a potential catastrophe would be irreversible: replacing a bridge that has collapsed is straightforward enough, but bringing back a species extinguished by global warming, or an entire city destroyed by a nuclear disaster, is impossible. Sunstein also suggests that Americans, who have benefited from emissions-producing industrialization, “have a special obligation to mitigate . . . harm or provide assistance” to Africa’s and India’s poorer, vulnerable residents, for whom the magnitude of climate change may be greater and who have fewer resources for reacting to disaster. (Of course, still more Africans and Indians might die of poverty induced by slower economic growth under a harsh global-warming treaty—showing how complex such considerations are.)
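The arithmetic behind Sunstein’s comparison is worth making explicit: on a pure expected-value calculation, the two risks he quotes are exactly equal, which is why magnitude and psychology must carry the rest of the argument. The sketch below is only an illustration of that equality, not a calculation from the book.

```python
# Expected deaths = probability of the event x deaths if it occurs.
# Sunstein's two hypothetical risks:
rare_catastrophe = 0.01 * 10_000   # 1 percent chance of 10,000 deaths
likely_accident = 0.50 * 200       # 50 percent chance of 200 deaths

print(rare_catastrophe)                     # 100.0 expected deaths
print(likely_accident)                      # 100.0 expected deaths
print(rare_catastrophe == likely_accident)  # True: equal on expected value
```

The point of the equality is that any extra weight we give the rare catastrophe has to come from something other than the expected toll, such as the irreversibility or sheer scale of the worst case.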
In two chapters, entitled “Money” and “The Future,” Sunstein further illustrates how difficult it is to assign a value to eliminating or reducing a risk—particularly a long-term risk. Let’s say, for instance, that a type of pollution today results in an increased risk of a type of cancer 20 years from now. How can we know how much it’s sensible to pay now to reduce that future risk, especially since the cancer may be easily curable by then? Or—here’s a real brain-teaser—how can we assess the value of a life not lived? That is, what is the cost of the risk that because of something we do today, somebody isn’t born three decades hence? While Sunstein’s discussion here is engrossing, its practical use is to show that we’re never going to be able to answer such hypotheticals with anything approaching certainty.
The problem with Sunstein’s approach in general is that we don’t know, and can’t know, the real risks in some cases where leaders need the most guidance—and such uncertainty multiplies over longer time frames. The risks and uncertainties that leaders must consider simply aren’t quantifiable and verifiable in the way that, for example, the risk of dying in a car wreck is (you can unambiguously prove that 40,000 people died in accidents on America’s highways last year, and that the risk this year is similar, barring breakthroughs in auto- or highway-safety technology). Sunstein’s suggestion that we consider qualitative variables—such as the likely magnitude of harm and the impact of a catastrophe on those least able to bounce back from it—is reasonable, but in the end only adds more uncertainty.
Unfortunately, the book, while an interesting enough intellectual tour, leaves policymakers and citizens with a less-than-earth-shattering conclusion: we should consider scenarios like nuclear and natural catastrophes, but not paralyze ourselves with worry. If we can take steps to eliminate or reduce these risks, we should, but only if we think those steps are worth it and if they won’t create their own problems, and we should keep in mind the effects that our actions will have not only on ourselves but also on the poor, the environment, and future generations. Of course, the discussion over which risks are reasonable and which steps are worthwhile belongs in the arena of public debate. And in the end, we may never know if we made the right decision, because we can’t see the road not taken.