Intel co-founder Gordon Moore famously wrote about how the number of transistors on silicon chips would double roughly every two years—an observation now known as Moore’s Law. But even as Intel pushes into nanotechnology, computing is now reaching the limits of that law. On Thursday, March 21, former Intel CEO Craig R. Barrett and Arizona State University President Michael Crow will be at the Phoenix Art Museum to answer the question, “What comes after the computer chip?” Ahead of the event, which is being hosted by Zócalo Public Square, we’ll be publishing a series of blog posts in which experts weigh in. For more information, visit the Zócalo Public Square website. (Zócalo Public Square is a partnership of the New America Foundation and Arizona State University; Future Tense is a partnership of Slate, New America, and ASU.)
In 1965, Gordon E. Moore, a co-founder of Intel, noted that the number of components in integrated circuits had doubled every year since their invention in 1958 and predicted that this annual doubling would continue for at least another ten years. Since that time, the power of computers has doubled roughly every year and a half, yielding computers that are millions of times more powerful than their ancestors of a half century ago. The result is the digital revolution that we see around us, including the Internet, iPhones, social networks, and spam.
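The arithmetic behind that claim is just compound doubling. A quick back-of-envelope sketch, assuming the rough figure of one doubling every 18 months sustained over 50 years (the exact period and span are simplifications, not figures from Moore's paper):

```python
# Back-of-envelope growth from Moore's-law-style doubling.
# Assumptions: one doubling every 1.5 years, sustained for 50 years.
years = 50
doubling_period = 1.5  # years per doubling

doublings = years / doubling_period      # about 33 doublings
growth = 2 ** doublings                  # cumulative growth factor

print(f"{doublings:.1f} doublings -> roughly {growth:.2e}x more powerful")
```

Even with these conservative inputs the growth factor comes out in the billions, so "millions of times more powerful" understates the trend rather than overstating it.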
Since Moore's observation, the primary method of doubling has been to make the wires and transistors that transmit and process information smaller and smaller: The explosion in computing power comes from an implosion in the size of computing components. This implosion can't go on forever, though, at least given the laws of physics as we know them. If we cram more and more, smaller and smaller, faster and faster components onto computer chips, they generate more and more heat. Eventually, the chip will melt. At the same time, basic semiconductor physics makes it difficult to keep increasing the clock speed of computer chips ever further into the gigahertz region. At some point—maybe even in the next decade or so—it will become hard to make semiconductor computer chips more powerful by further miniaturization.
At that point, the most important socioeconomic event that will occur is that software designers will finally have to earn their pay. Not that they are not doing good work now—merely that they will have to use the resources available rather than simply assuming that computer power will have doubled by the time their software comes to market, thereby accommodating the additional slop in their design. Enforced computational parsimony might not be a bad thing. The luxury of continually expanding computer power can lead to design bloat. Is Microsoft Word today really better than Word in 1995? It is certainly more obnoxious about changing whatever word you are trying to write into the word it thinks you want to write.
The inevitable end to Moore's law for computer chips does not imply that the exponential increase in information processing power will end with it, however. The laws of physics support much faster and more precise information processing. For a decade and a half, my colleagues and I have been building prototype quantum computers that process information at the scale of atoms and elementary particles. Though tiny and computationally puny when compared with conventional chips, these quantum computers show that it is possible to represent and process information at scales far beyond what can be done in a semiconductor circuit. Moreover, quantum computers process information using weird and counterintuitive features of quantum mechanics that allow even these small, weak machines to perform tasks—such as simulating other quantum systems—that even the most powerful classical supercomputer cannot do.
Computation is not the only kind of societally relevant information processing that is improving exponentially. Dave Wineland of the National Institute of Standards and Technology shared the Nobel Prize in Physics this year in part for his work on quantum computing, but also in part for his use of funky quantum effects such as entanglement to construct the world's most accurate atomic clocks. Conventional atomic clocks make up the guts of the global positioning system. Wineland's novel clocks based on quantum information processing techniques have the potential to make GPS thousands of times more precise. Not just atomic clocks, but essentially every technology of precision measurement and control is advancing with its own “personal Moore's law.” The result is novel and startling developments in nanotechnology, medical devices and procedures, and personal hardware, including every known way of connecting to the Internet.
Finally, if we look at the ultimate limits to information processing, the laws of quantum mechanics and elementary particles allow much more extreme computation than could ever be found on a computer chip. Atomic scale computation? How about quark-scale computation? The ultimate level of miniaturization allowed by physical law is apparently the Planck scale, a billion billion billion times smaller than the current computational scale. And why just make things smaller—why not build larger computers? Why not enlist planets, stars, and galaxies in a universal computation? At the current rate of progress of Moore's law, in 400 years, the entire universe will be one giant quantum computer. Just don't ask what the operating system will be.
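One hypothetical way to see where a figure like 400 years could come from: Lloyd has estimated elsewhere that the universe can register on the order of 10^90 bits. The specific numbers below (10^90 bits for the universe, 10^13 bits for a large present-day machine) are assumptions for illustration, not figures given in this essay:

```python
import math

# Hypothetical back-of-envelope behind the "400 years" remark.
# Assumed figures (not stated in the essay): the universe registers
# ~1e90 bits, a large computer today holds ~1e13 bits, and capacity
# doubles every 1.5 years.
universe_bits = 1e90
current_bits = 1e13
doubling_period = 1.5  # years per doubling

doublings = math.log2(universe_bits / current_bits)  # ~256 doublings
years = doublings * doubling_period

print(f"~{doublings:.0f} doublings, or about {years:.0f} years")
```

Under those assumptions the answer lands near 384 years—the same ballpark as the essay's 400.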