In 1965, Gordon Moore, the legendary microchip pioneer then at Fairchild Semiconductor, was asked to speculate on the future of integrated circuits. He looked around the lab, where engineers were starting to place roughly 60 components on a chip, a number that had doubled approximately every year since the planar chip design originated in 1959. He then made an astounding prediction: complexity and miniaturization would keep doubling every year, so that in a decade a chip would hold 60,000 components, even as costs continued to plummet. And so it happened, more or less: the prediction for 1975 came remarkably close to the mark. Today, the most advanced chips have transistor counts above 100 billion.
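Moore's extrapolation is simple compound doubling, and the arithmetic can be checked on the back of an envelope. A minimal sketch, using the 60-component starting point from the text (the function name is illustrative):

```python
# Back-of-the-envelope check of Moore's 1965 extrapolation:
# roughly 60 components per chip, doubling every year for a decade.
def components(start: int, years: int, doublings_per_year: float = 1.0) -> int:
    """Project a component count under exponential doubling."""
    return round(start * 2 ** (years * doublings_per_year))

projected_1975 = components(60, 10)
print(projected_1975)  # 61440 -- on the order of the 60,000 Moore predicted
```

Ten doublings multiply the count by 1,024, which is why "doubling every year" turns 60 components into roughly 60,000 in a single decade.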
It should come as no surprise, then, that we may now be close to the end of this process. But what is the meaning and significance of this limit? Where have we quietly been heading in the mad dash that Moore first described?
As Moore noted in later reflections, materials are made of atoms, so the process should reach an end when we start manufacturing transistors the size of a single atom. That has already happened: scientists at Tsinghua University in Beijing announced earlier this year that they have built a graphene transistor gate with a length of 0.34 nanometers, roughly the size of a single carbon atom.
Drawing features the size of an atom or a few atoms is a miracle of the transcendental. What tool could you even use? Wouldn’t it have to be at least as small as the features you want to draw? In reality, of course, you draw them with light, not with solid materials, in a process that some compare with black magic.
In lithography, the minimum feature size is constrained by the wavelength of the light applied. To produce the most advanced chips available today, you vaporize droplets of molten tin with two successive laser blasts. The resulting plasma emits extreme ultraviolet radiation with a wavelength of only 13.5 nanometers, and various tricks and tweaks enable us to build transistors even smaller than the wavelength of the light used to pattern them. (The clean room itself is bathed in a special yellow light that contains no ultraviolet radiation, so that stray illumination cannot expose the light-sensitive wafers.) At such a short wavelength, lenses absorb the light rather than focus it, so mirrors are deployed instead to guide it to a silicon wafer and draw transistors with features measuring five nanometers or less, the size of just a few atoms. For scale, a fingernail grows, on average, one nanometer per second.
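The link between wavelength and feature size is commonly summarized by the Rayleigh criterion, CD ≈ k₁·λ/NA, where NA is the numerical aperture of the optics and k₁ absorbs the process "tricks" mentioned above. A sketch with illustrative numbers (the k₁ and NA values below are assumptions for the example, not figures from the text):

```python
# Rayleigh criterion for the minimum printable feature (critical dimension):
# CD = k1 * wavelength / NA.  The k1 and NA values here are illustrative.
def critical_dimension(wavelength_nm: float, k1: float, na: float) -> float:
    """First-order estimate of the smallest printable feature, in nanometers."""
    return k1 * wavelength_nm / na

# EUV light at 13.5 nm, mirror optics with NA ~0.33, aggressive k1 ~0.3:
cd = critical_dimension(13.5, k1=0.3, na=0.33)
print(f"{cd:.1f} nm")  # 12.3 nm
```

Pushing k₁ down with resolution-enhancement techniques, and printing a pattern in multiple exposures, is how features end up far smaller than the 13.5-nanometer light that draws them.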
After a layer of transistors has been drawn, many other layers need to be superimposed. Semiconductor chips are the skyscrapers of the infinitesimal: the more layers, the more complex and powerful the chip. The most advanced designs can have hundreds of layers, and they all need to align with nanometer precision. Relative to its surface area, a 64-layer memory chip has the proportions of a tower three times the height of the Burj Khalifa in Dubai, and we are already producing chips with 256 layers.
The point of miniaturization is less size than power. Electrons must move through the semiconductor material as fast as possible, and the smaller the distances, the faster and more powerful the chip can be. Shrinking the transistors by half on a chip of a given size yields approximately four times the computing power, because the number of transistors quadruples while each one also switches faster. Approaching the limit of miniaturization thus means approaching the limit of how the universe works: a construct built from scratch.
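The quadrupling follows from area scaling: a chip is a two-dimensional surface, so shrinking features by a linear factor s packs s² times as many transistors into the same area. A sketch of that arithmetic (the extra per-transistor speed gain from shorter distances is left as a comment, since it depends on the process generation):

```python
# Area scaling on a fixed-size chip: shrinking features by a linear
# factor s fits s**2 times as many transistors into the same area.
# Each transistor also switches faster (shorter distances), which adds
# a further, process-dependent gain on top of the density factor.
def density_gain(shrink_factor: float) -> float:
    """Multiplier on transistor count when features shrink by `shrink_factor`."""
    return shrink_factor ** 2

print(density_gain(2))  # 4.0 -- halving feature size quadruples the count
```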
The Intel 4004, the size of a fingernail, delivered the same computing power as ENIAC, the first electronic computer, which was built in 1946 and filled an entire room. Released in 1971, the 4004 was the first programmable processor on the market, following software instructions to perform many different functions across various devices. It was built from approximately 2,300 transistors, a paltry sum by recent standards, but the chip made it clear that these powerful microscopic machines would soon have the capacity to construct worlds of their own. The latest microchips can perform tens of trillions of calculations a second and build convincing simulations of physical stores, where shoppers can grab objects from virtual shelves and get billed through their mobile devices as they leave.
Something mysterious is at play here. We are desperately trying to squeeze every possible ounce of computing power out of semiconducting materials, seemingly anxious that the reservoir will run dry before we reach the goal. But what is the goal, if not a smaller and more powerful smartphone?
At some point, we are destined to reach the limit of particle physics. When we do, looking around, the world might not look so different, but it will be a newly built world, the result of computers mighty enough to re-create virtually all the essential elements of the physical world around us. The secret goal of the microchip is the metaverse. What else could be the endpoint of the search for the natural limits of complexity? The ultimate instance of complexity in the world is the world itself.
Here is what the final stretch might look like: large virtual environments rendered in real time, with data transferred at ultra-low latencies to hundreds of millions or even billions of users sharing the same persistent life world. The immersive experience of today's games and platforms was powered by the last generation of microchip breakthroughs; truly persistent and immersive computing, at scale and accessible to billions of humans in real time, will require even more, perhaps 1,000 times the computational efficiency available today. Yet, as Raja Koduri of Intel puts it, “the dream of providing a petaflop of computer power and a petabyte of data within a millisecond of every human on the planet is within our reach.”
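The magnitudes in Koduri's formulation can be made concrete with unit arithmetic. A sketch of the aggregate scale implied (the eight-billion population figure is an illustrative assumption, and the millisecond in the quote refers to latency of access, not transfer time):

```python
# Rough aggregate scale of the target: a petaflop of compute and a
# petabyte of data within a millisecond of reach for every human.
PETA = 10 ** 15
population = 8 * 10 ** 9  # assumed world population, for illustration only

aggregate_compute = population * PETA  # FLOP/s summed across all users
aggregate_data = population * PETA     # bytes of low-latency data in total

print(f"{aggregate_compute:.0e} FLOP/s in aggregate")  # 8e+24
print(f"{aggregate_data:.0e} bytes in aggregate")      # 8e+24
```

Eight yottaflops of compute and eight yottabytes of nearby data: several orders of magnitude beyond today's combined data-center capacity, which is the gap the "1,000 times" estimate gestures at.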