What We Owe the Future, by William MacAskill (Basic, 352 pp., $27.99)

Oxford philosopher William MacAskill thinks morality requires us to take care of the future. His new book, What We Owe the Future, seeks to explain what this obligation amounts to, and intervenes in various philosophical debates about morality and moral status. What We Owe the Future is written in a simple style, a shining example of the conventions of the analytic philosophy tradition—conventions from which many philosophers quickly depart when writing for the public. His views are stated so clearly as to resemble slogans but defended so rigorously as to seem the opposite. “You can shape the course of history,” MacAskill writes—and, he adds, you ought to.

Specifically, you should do what you can to ensure that the future has lots of happy, prosperous people. You should focus on those choices with the most significant, persistent, and contingent effects, in MacAskill’s terms: effects that lead to the most good (significance) for the longest time (persistence), and that would not come about if you didn’t act (contingency). For MacAskill, you can do two kinds of important things for humanity: help secure its survival and change its trajectory. Imagine a graph plotting human quality of life over time: securing survival makes the curve longer, while changing trajectory makes it higher. MacAskill wants to maximize the total good in the world’s future, which, on the graph, is the area under the curve across all the time that has yet to pass. The thesis that this is what we ought to do is called “longtermism.”
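MacAskill presents this picture with graphs rather than symbols, but the idea admits a one-line formalization. In the sketch below, n(t) and v(t) are my own shorthand for the population and the average quality of life at time t; the notation is not MacAskill’s:

\[
V \;=\; \int_{t_{\text{now}}}^{T} n(t)\,v(t)\,\mathrm{d}t
\]

Securing survival pushes the endpoint T further out; improving the trajectory raises v(t); and longtermism, on this rendering, says to act so as to make V as large as possible.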

With these theoretical starting points established, the rest of the book discusses ways to accomplish these tasks—changing bad mores or locking in good ones; avoiding extinction, collapse, and stagnation—and responds to some philosophical debates about just what the right and the good are in the context of whole civilizations, or what’s sometimes called “population ethics.” Once we recognize that the future matters, MacAskill thinks, we’ll have to recognize that it matters a lot. For one thing, the population might grow significantly. But even if it plateaus, the vast expanse of time in which civilization might continue means there could be far more people in the future than there are in the present and were in the past. By sheer numbers, the future matters most.
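A back-of-the-envelope calculation, using round figures of my own rather than MacAskill’s, shows the scale involved. Roughly 10^11 humans (about 100 billion) have ever lived. If civilization lasts another million years at something like its present size, with about 10^10 people alive per century, then

\[
10^{10}\ \tfrac{\text{people}}{\text{century}} \times 10^{4}\ \text{centuries} \approx 10^{14}\ \text{future people},
\]

about a thousand times the number of past and present people combined.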

So what, then, should we do for the future? How can we care for it? The most obvious way is by avoiding extinction and collapse. We should protect against future pandemics, against asteroid impacts, against nuclear war, against climate change, and against fuel depletion. MacAskill discusses ways to account for the possibility that the future might not be human: that in the event of human extinction, another sapient species could evolve in our place, or that an artificial intelligence we create could replace us. In addition to the philosophical content, the range of empirical issues MacAskill addresses is impressive. What caused humans more or less universally to come to see slavery as evil? Will our civilization stagnate? Will a highly powerful artificial intelligence share our values? MacAskill employs various methods to figure out what we’re capable of doing and, more generally, what types of events are likely to occur, in order to inform his discussion of what we ought to do.

What We Owe the Future has drawn various criticisms. Has MacAskill—called a “reluctant prophet” in The New Yorker, a “technocrat” in the Wall Street Journal, and a “philosopher-geek” in UnHerd—written “a thrilling perspective for humanity,” as the Guardian has it, or is his perspective “tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies,” as a Washington Post reviewer argued? MacAskill’s book has also seen treatments from an impressive array of philosophers, among them Julian Baggini, Richard Chappell, Regina Rini, Kieran Setiya, and Kathleen Stock.

Some of these assessments are not convincing. Christine Emba, writing in the Washington Post, thinks longtermism is somehow too easy. “Conveniently, focusing on the future means that longtermists don’t have to dirty their hands by dealing with actual living humans in need, or implicate themselves by critiquing the morally questionable systems that have allowed them to thrive,” Emba writes. “A not-yet-extant population can’t complain or criticize or interfere, which makes the future a much more pleasant sandbox in which to pursue your interests—be they AI or bioengineering—than an existing community that might push back or try to steer things for itself.” But it’s hardly difficult or dangerous to attack morally questionable systems in the age of social media. In the Wall Street Journal, Barton Swaim writes: “Skeptical readers, of whom I confess I am one, will find it mildly amusing that a 35-year-old lifelong campus-dweller believes he possesses sufficient knowledge and wisdom to pronounce on the continuance and advancement of Homo sapiens into the next million years.” I’ll cop to being a fellow 35-year-old campus-dweller, but the shot at MacAskill’s putative arrogance seems unfair.

Other analyses have assailed the idea that we should care about the future at all. For these critics, only the here and now, the people alive today, are important. But there’s little difference in principle between refusing to help someone because they are in a different time and refusing to help someone because they are in a different place. MacAskill’s longtermism is an extension of the original motivation behind “effective altruism,” rooted in an argument made by the philosopher Peter Singer about saving a drowning child. Singer thought that if it would be monstrous not to jump in and save a child you happened to see drowning one day because it might ruin an expensive shirt, it would be equally monstrous not to donate money that would save a starving child in some impoverished country. For Singer, physical proximity didn’t make a moral difference. On these terms, it’s hard to see how temporal proximity does, either.

A similar objection notes that future people don’t exist yet. Writing in the Boston Review, Setiya argues that it’s natural to think that an event wiping out 99 percent of the world’s population is almost as bad as one wiping out all of it. MacAskill, by contrast, thinks that extinction is much worse, because it also forecloses, in some sense, the lives of the billions more who would someday be born if humanity recovered and propagated again. Whose intuition is right? Setiya and MacAskill also disagree about how we should view the creation of more people. “Once someone is born, you should welcome their existence as a good thing,” writes Setiya. “It doesn’t follow that you should have seen their coming to exist as an improvement in the world before they came into existence.” But how could adding something good, without even pairing it with anything bad, fail to be an improvement?

The conversation surrounding MacAskill’s book raises deep questions about morality. Some philosophers, MacAskill included, think it makes sense to render ethical judgments about situations (or “states of affairs”). For these thinkers, ethical judgments about other things often emerge from those original judgments about situations, as when we judge actions by the states of affairs they might bring about or foreclose. But other philosophers think that situations aren’t so important. Rather, for them, ethics is about comparing actions, intentions, character traits: deeper parts of human persons.

Many critics thus see in MacAskill’s longtermist stance a lack of humanity. Calling his review “The New Moral Mathematics,” Setiya writes, “Morality isn’t made by us—we can’t just decide on the moral truth—but it’s made for us: it rests on our common humanity, which AI cannot share.” Stock, in UnHerd, calls longtermism “unashamedly nerdy,” a matter of “logic-chopping” for “skinny, specky, brainy philosophers,” which uses “graphs, tables and graphics” to “capture the imaginations of the sort of tech-bro entrepreneur fascinated by the possibility of freezing his own head.” Rini, in the Times Literary Supplement, writes: “What We Owe the Future is a book of compartmentalization, of ideas bracketed away from each other behind lab-window hypotheticals.” Rini sees in the book “moral mathematics” done in a “vast, impersonal universe.” The sort of moral philosophy undertaken here is, for Rini, more like “actuarial science fiction.” For these reasons, Rini sees What We Owe the Future as “more clever than wise.”

But the use of math, graphs, and spreadsheets hardly seems unwise, and logic-chopping hardly seems inappropriate, for a book on morality. Society already makes decisions in highly mathematized and impersonal ways. During the pandemic, for example, we had to weigh children’s frustration and learning loss against greater death rates among vulnerable populations. Before 2020, we might have thought it a good idea to weigh the possibility of a deadly pandemic against the likelihood and costs of other tragedies and disasters, including the sorts of scenarios MacAskill discusses in his book (such as an asteroid crashing into the Earth). The math doesn’t tell us what to value, but it does tell us what to do once we have decided what we value. Simply feeling our way to answers to such questions seems a daunting task. Though we can’t immediately infer how morality works at the individual level from what we value at the societal level, such commonplace calculations cut against these criticisms. Unless you think—and some philosophers do think this—that the large-scale future consequences of our practices don’t matter at all, it’s hard to see how the technical tools used to predict and quantify those consequences could be a poor fit for a book of applied ethics.

Some reviews mention the possibility that we just can’t predict anything about the future—that it’s hubristic to think that we know much about what will happen, and what humans will be like, in more than a few years, and that all the graphs and charts are just a way of avoiding this simple fact. MacAskill discusses this difficulty and suggests three rules of thumb: take actions that fit a range of possible futures; increase the number of future options; and learn more. Even if we don’t know whether a future pandemic or a devastating asteroid collision is more likely, we can promote general science education, which could help us prepare for both.

In any event, though, this criticism misses the mark. First, it’s not clear why it should matter that we have trouble predicting the future. Suppose we just don’t know how likely it is that an asteroid will demolish our civilization. Should we just not care, morally, about such an eventuality? Second, if we really do have so much trouble predicting the future, that’s a problem for everyone, not just for MacAskill. Almost all contemporary moral and political projects claim to predict some potential goods that they might effect, and harms that would occur if they were not enacted. Third, if What We Owe the Future is persuasive when it comes to the future mattering more than the present, then even a small chance of helping the future is worth more than a big chance of helping the present. And fourth, it seems obvious that we can predict the future in broad strokes. An asteroid impact would make it more likely that humanity would go extinct. Research in areas like pandemic response and artificial intelligence will probably continue to advance in the near term. And so on. Such predictions may be sufficient to apply the longtermist ethos in a concrete way.
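The third point is just expected value. Using the same illustrative magnitudes as before (mine, not MacAskill’s), a one-in-a-thousand chance of securing a future of 10^14 people is worth, in expectation,

\[
0.001 \times 10^{14} = 10^{11}\ \text{lives},
\]

ten times more than a guaranteed benefit to all 10^10 people alive today. Anyone who grants that future people count must reckon with arithmetic of this kind.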

The reviews reveal two divides within philosophy, one philosophical and one cultural. The philosophical divide has to do with the nature of morality. Some think that what’s moral or immoral has something inherently to do with humanity: morality is constructed by or, as Setiya says, for us. Moral facts aren’t floating around to be discovered in a lab or plotted on a graph; they’re part of the social world, not the background structure of reality. Others believe that morality exists “prior to” human beings. Just as the universe is built up with atoms and with distinctions of space and time, so, too, is it built up with “moral facts” and distinctions of right and wrong, good and bad.

The cultural divide, meanwhile, has to do with philosophy as a field of study. Is it more like math and science, or is it more like literature? Should philosophers be familiar with discoveries in physics, theorems of algebra, and tools from statistics? Or should they be familiar with world history, the great works of art, and contemporary politics? One might think that philosophers are well-placed to bridge these two cultures by bringing rigorous argumentation to perennial problems. But some see in the clarity that rigor affords a false claim to distance from the worries of the world. Attacks start flying in at an almost-personal level—“You’re claiming not to have biases!” “You’re claiming to be objective!” “You’re claiming to be omniscient!”—all for trying to make one’s propositions as clear as possible. Such attacks inevitably reveal more about the speaker than about the target.

Nevertheless, MacAskill’s attempt to think of morality on the model of math and science errs in a key respect: his account of value change. One way of making things better for the future, MacAskill thinks, is to improve our values today or to start guiding our values toward improvement in the future, so that people are treated better in the future by their own contemporaries. MacAskill is careful, and correctly so, to note that we have every reason to think that, just as previous generations have gotten many things wrong about morality, we today are also getting many things wrong. Unfortunately, he still seems to think that moral progress unfolds in the same way as scientific and technological progress.

That analogy doesn’t work. Demonstrating the supposed truth of modern moral convictions to individuals from the past wouldn’t be as simple as demonstrating the reliability of our science or the efficacy of our technology. By the same token, someone from the future would struggle to demonstrate to people of the present that their future values are any better than ours. Society might look on future values as being just as disastrously wrong as present or past ones.

Indeed, plenty of people find no improvement from the past to the present—lamenting that things have been going downhill since the Protestant Reformation, the Glorious Revolution, or, in my case, the 1990s. How can MacAskill be so confident that contemporary humans are morally correct relative to the past while being so hesitant to assert that contemporary humans will end up morally correct relative to the future? After all, future humans will surely possess values just as distant from our own as ours are from those of our ancestors.

In a famous passage from his “Theses on the Philosophy of History,” the Frankfurt School philosopher Walter Benjamin wrote:

A Klee painting named ‘Angelus Novus’ shows an angel looking as though he is about to move away from something he is fixedly contemplating. His eyes are staring, his mouth is open, his wings are spread. This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing in from Paradise; it has got caught in his wings with such a violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.

MacAskill calls on us to turn around, to become the storm. It is a strange-sounding call. But ultimately, it rests on more sensible foundations than many contemporary visions of the moral life, whose proponents seem so incensed that MacAskill’s longtermism has entered the fray as a humble and workmanlike competitor.
