Railroads, the automobile and aircraft, chemistry and pharmaceuticals, electricity, telephone, radio and television, the Internet: each brought about deep transformations, the kinds that define historical epochs. Yet no one predicted any of them. And once each was underway, forecasters again failed to anticipate the nature and scale of its impact.
By definition, forecasting is necessary to all forms of planning, whether in business or government. It directly affects decisions made in the present. Sometimes the goal of a forecast is to facilitate beneficial outcomes, or prepare for downsides. In other cases, it may be to direct development framed around big aspirations, such as curing cancer or colonizing Mars or eliminating fossil-fuel use. It’s a quasi-profession, often mocked, but one with a serious mission nonetheless. “The future cannot be predicted,” physics Nobelist Dennis Gabor wrote in 1963, “but can be invented.”
John Perry Barlow’s twist on Gabor’s aphorism is perhaps more relevant today. Barlow, who died earlier this year, was a poet and essayist, and, most famously, a lyricist for the Grateful Dead as well as founder of the Electronic Frontier Foundation. He said: “The best way to invent the future is to predict it—if you can get enough people to believe your prediction, that is.” He was right. Persuasive prediction can not only sell books but also shape thinking and influence business and government spending.
The key to useful technology forecasting is to figure out not just what is likely to happen, but more important, when, and at what scale. Take energy, for example, without which nothing else in human society is possible. Energy-technology forecasts date back to the dawn of the Industrial Revolution. William Stanley Jevons, the nineteenth-century English economist best known for the efficiency paradox that bears his name, was the first to focus not only on the rate of exhaustion of an energy resource—in his case, coal—but also on the underlying technology dynamics. Our modern era of professionalized energy forecasting, and with it the creation of massive energy bureaucracies, began with the 1973 Arab oil embargo. Over a period of two months, that event caused petroleum prices to roughly quadruple. Policymakers and forecasters got busy. In 1974, the editors at U.S. News and World Report, noting the “staggering . . . rate of change” in innovation, published a book containing technology forecasts for the end of the twentieth century. “Unless a massive effort to solve the [energy] problem is launched immediately,” the editors warned, “Americans face a doomsday future.”
By century’s end, it was clear that most of these energy-related forecasts had proved wrong. Among the misses: the internal-combustion engine would be “a thing of the past” before the twentieth century ended; “shortages of natural gas” would become severe; oil scarcity would require “massive research efforts to develop alternative fuels”; “no hands” driving would make possible the “100 mile-per-hour beltway”; and “high-speed trains in vacuum-sealed tubes” would arrive. (Elon Musk has resurrected the dream of that last one.) Today’s energy forecasting is no longer motivated by fears of shortages of hydrocarbons but instead, ironically, by concerns about our supposedly excessive use of abundant supplies—and the resulting carbon-dioxide emissions.
An ardent advocate for a different energy future, Bill Gates has called for major U.S. and global efforts to find technological “miracles” in energy domains. By “miracle,” Gates means something that might seem impossible now—in the same way that, in earlier times, no one imagined, say, the personal computer or the polio vaccine. Finding such miracles, Gates concedes, is inherently a “very uncertain process,” for which there is no “predictor function.” On that, I believe he’s wrong in one important sense: there is a kind of predictor function when it comes to the forecasts themselves—most are predictably wrong.
History shows a pattern of three interrelated classes of failed predictions.
The first comes from a form of presentism—a failure to appreciate lessons from earlier technological transformations. Even when these are acknowledged, the response is often “this time, it’s different.” By 1974, for example, we had seen, over the previous few decades, a series of technological miracles collectively more amazing than anything that has followed more recently. Those included the advent of nuclear energy, the first solar photovoltaic systems, and satellites; the moon landing; commercial jet travel; the invention of the transistor, the fiber-optic cable, and lasers; and the commercialization of—and much hand-wringing about—“thinking machines.” Those groundbreaking technologies framed how people of that day thought about the future. One is compelled to note the obvious: that thoughtful people in earlier eras were every bit as wise as anyone today. (Perhaps wiser, some would argue.)
But modernity seduces, and we reflexively believe that we’re smarter today because, well, we have smartphones. I recommend finding a copy of Today Then, a collection of several dozen essays, each a forecast about the year 1993, written by an expert or prominent thinker in 1893—a time of massive technological transformation—and originally published on the occasion of the 1893 Chicago World’s Fair. It was the pinnacle of the rail age; society had vaulted from an agrarian to an industrial economy. The age of electricity and the automobile was dawning. The failure to consider lessons from history could be called Lewis’s Law, after C.S. Lewis, who, in The Screwtape Letters, characterized as a kind of sin the modern propensity to disregard the lessons offered by wise predecessors who aren’t “modern.”
The second forecasting failing we could call a Moore’s Law fallacy—it’s a kind of transference failure, or category error, and it’s not new. Nineteenth-century amazement at the steam engine and electricity inspired similar techno-forecasts, and arguably spawned the genre of science fiction, with Mary Shelley’s Frankenstein. Nuclear energy inspired forecasts of atomic-powered aircraft and cars, and even flying cities, not to mention dystopian visions. Today, the wizards of Silicon Valley invoke Moore’s Law as the principle that will lead to a new energy future. First proposed in 1965 by Gordon Moore, an Intel co-founder, Moore’s Law was an observation that became a prediction: the number of transistors fabricated on a single silicon microchip would double every two years. Moore’s Law has yielded staggering gains in computing power and cost-effectiveness. Compared with the dawn of modern computing, today’s information hardware uses less than one hundred-millionth of the energy per logic operation and occupies less than one-millionth of the physical space. A single smartphone is thousands of times more powerful than a room-sized IBM mainframe from the 1970s.
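The compounding behind that two-year doubling is easy to check with a few lines of arithmetic. A minimal sketch in Python; the baseline (the Intel 4004 of 1971, with roughly 2,300 transistors) and the 2017 endpoint are illustrative reference points of my own choosing, not figures from this essay:

```python
# Illustrative only: project Moore's Law's two-year doubling forward from
# a well-known baseline (Intel 4004, 1971, ~2,300 transistors). Baseline
# and endpoint are assumptions for the sketch, not claims from the essay.

def doublings(start_year: int, end_year: int, period: float = 2.0) -> float:
    """Number of doubling periods between two years."""
    return (end_year - start_year) / period

base = 2_300                       # transistors on the Intel 4004 (1971)
n = doublings(1971, 2017)          # 23 doublings
projected = base * 2 ** n          # ~19 billion transistors per chip

print(f"{n:.0f} doublings since 1971 -> ~{projected:,.0f} transistors")
```

Twenty-three doublings multiply the starting point by roughly eight million, which is why a pocket-sized device can outclass a room-sized 1970s mainframe.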
A surprising number of eager forecasters believe that this kind of innovation is imminently achievable with energy technology. Here, they conflate the physics of information with the physics of energy, two domains governed by profoundly different rules. If energy systems could follow a Moore’s Law trajectory, an automobile engine, for example, would shrink to the size of an ant, while producing a thousand-fold more horsepower. Engineers can indeed build ant-sized engines—but those engines produce roughly one hundred-billionth of the power of a car engine. No amount of money or Silicon Valley magic can cause a car engine’s power, or its equivalent, to disappear into your pocket. Moore’s Law-like improvements in energy aren’t just unlikely; they can’t happen, given the physics.
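The category error can be made concrete with round numbers. A sketch assuming a 150-horsepower car engine (my illustrative figure); the thousand-fold and hundred-billion-fold ratios are the essay's own:

```python
# Contrast a Moore's-Law fantasy for engines with physical reality.
# The 150-horsepower baseline is an assumed round number for illustration.

HP_TO_WATTS = 745.7
car_engine_w = 150 * HP_TO_WATTS     # ~112 kW, a typical car engine

# The fantasy: an ant-sized engine with a thousand-fold more power.
fantasy_w = car_engine_w * 1_000     # ~112 MW

# The reality: ant-sized engines deliver roughly one hundred-billionth
# of a car engine's power.
reality_w = car_engine_w / 100e9     # ~1 microwatt

print(f"fantasy: {fantasy_w / 1e6:.0f} MW; "
      f"reality: {reality_w * 1e6:.1f} microwatts")
```

The gap between the two outcomes spans fourteen orders of magnitude, which is the sense in which such improvements can't happen, given the physics.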
In the world of people, cars, planes, trucks, and large industrial systems—as opposed to the world of algorithms and bits—hardware tends to expand, not shrink, along with speed and carrying capacity. To note one example: the efficiency of wind turbines increases with size; the newest ones dwarf the Washington Monument. The energy needed to move a ton of people, make a ton of steel or silicon, or grow a ton of food is determined by properties of nature whose boundaries are set by laws of gravity, inertia, friction, mass, and thermodynamics.
This is not to say that Silicon Valley and information technology won’t usher in dramatic changes to the production of energy and physical goods. Information and analytics can wring far more efficiency out of physical and energy systems, from solar panels to shale rigs: call it the Uber or Amazon effect. Businesses and even social structures will get disrupted, but the outcome won’t be miracles analogous to discovering petroleum, nuclear fission, or the photovoltaic cell.
The third class of forecasting failing should be called Amara’s Law. The futurist Roy Amara, longtime president of the Institute for the Future, deserves credit for observing that forecasters tend to overestimate short-term technology change and underestimate the long term. History shows that, when it comes to major technological dislocations, most forecasts get not only the “what” incorrect but also the “when.” Experts commonly fail to appreciate the amount of time it takes for engineering to progress from invention to practicality, and then again toward the “inflection point”—the point at which a new technology is practical enough to enter widespread use, often characterized in projections by the well-known hockey-stick curve.
Everything looks like an overnight success after the inflection point. Andy Grove, Intel’s storied CEO, frequently wrote about the importance of understanding the engineering challenges and time needed to reach that milestone. It took 20 years after the invention of the automobile, for example, before a practical design emerged, the Model T. Then nearly 20 more years passed until the automobile inflection started, when sales took off. Similarly, it was nearly 20 years from first fission to the first commercial nuclear reactor. And nuclear-power technology turns out to be so difficult that we’re still waiting for its inflection point.
Another example: getting to the moon seemed to happen quickly, but it was 40 years after the invention of the rocket that John F. Kennedy issued his challenge, and then almost another decade before the 1969 landing. Despite the dreams of Jeff Bezos and Elon Musk, we’re still waiting for the inflection point on this achievement, too—say, putting lots of people into outer space, whether to work, live, or take vacations.
Even the vaunted pace of computers has followed the same slow pattern. It took nearly 20 years after the invention of the first electronic computer for the development of a practical commercial computer, the Univac, to reach fruition. It took another 20 years before the mainframe inflection point, and still another 20 before the PC inflection started. It was also 20 years from what we could call first “fission” for the Internet before Amazon would go public in 1997. Then, another 20 years passed before it became obvious that e-commerce was reaching an inflection point. E-commerce is taking off now, of course—but it still constitutes less than 10 percent of all retail.
Amara’s Law essentially acknowledges the inertia of innovation. It takes decades to convert a foundational invention or discovery into a commercially viable product. And then it takes additional decades before that product starts scaling to society-level deployment. Ironically, once the inflection point is reached, history shows that forecasters then underestimate the pace of change and disruption.
None of these three classes of forecasting failures implies that there is an end to the magic of innovation. But when it comes to finding the next miracle, whether in energy or in any technology domain, we should invoke what we might now call Gates’s Law: the timing and character of transformational discovery have no “predictor function.” We can’t predict the what, the when, or even the where.
I’ll go on record with two predictions, though. First, today’s popular energy forecasts, both aspirational and dystopian, will be found to be wrong, again. Second, the future will reveal new physics, and new foundational innovations—but you won’t hear about any of it in today’s forecasts.