Future Tense

What Will Come After the Computer Chip? The End of the “La-Z-Boy” Programming Era.

Intel co-founder Gordon Moore famously wrote about how the number of transistors on silicon chips would double roughly every two years—an observation now known as Moore’s Law. But even as Intel pushes into nanotechnology, computing is now reaching the limits of that law. On Thursday, March 21, former Intel CEO Craig R. Barrett and Arizona State University President Michael Crow will be at the Phoenix Art Museum to answer the question, “What comes after the computer chip?” Ahead of the event, which is being hosted by Zócalo Public Square, we’ll be publishing a series of blog posts in which experts weigh in. For more information, visit the Zócalo Public Square website. (Zócalo Public Square is a partnership of the New America Foundation and Arizona State University; Future Tense is a partnership of Slate, New America, and ASU.)

The important question for the end user is not what comes after the chip, but how chips can be designed and integrated with enough ingenuity that processing speed keeps improving even as physics constrains the speed and size of circuits.

Ever since John von Neumann first enunciated the architecture of the modern computer in 1945, processors and memory have both improved faster than the ability to move data between them, leading to an ever-worsening “von Neumann bottleneck”: the connection between memory and the CPU (or central processing unit).

Because chip features can no longer simply be made smaller, the only way forward is through increasing parallelism: doing many computations at once instead of, as in a classic von Neumann architecture, one computation at a time. (Each computation is, at bottom, a series of logical operations such as “AND” and “OR” executed in the correct order by the hardware; that sequencing is the basis of how a computer functions.)
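To make the difference concrete, here is a small Python sketch, purely illustrative rather than anyone’s production code, that performs the same work first one step at a time and then spread across several cores at once:

```python
# A minimal sketch contrasting sequential and parallel computation.
# The task (summing squares) and the function names are illustrative only.
from multiprocessing import Pool

def square(x):
    return x * x

def sum_squares_sequential(numbers):
    # Classic von Neumann style: one operation after another.
    total = 0
    for n in numbers:
        total += square(n)
    return total

def sum_squares_parallel(numbers, workers=4):
    # Parallel style: the squaring is spread across several worker processes.
    with Pool(processes=workers) as pool:
        return sum(pool.map(square, numbers))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sum_squares_sequential(data) == sum_squares_parallel(data)
```

The parallel version can finish sooner only when the work actually splits into independent pieces; deciding how to split it, and how to stitch the results back together, is exactly where the programming gets harder.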

Though the first multiprocessor architecture debuted in 1961, the practice didn’t become mainstream until the mid-’00s, when chip companies started placing multiple processing units, or “cores,” on the same microprocessor. Chips often have two or four cores today. Within a decade, a chip could have hundreds or even thousands of cores. A laptop or mobile device might have one chip with many cores, while supercomputers will be composed (as they are today) of many such chips in parallel, so that a single computer will have as many as a billion processors before the end of the decade, according to Peter Ungaro, the head of supercomputing company Cray.

Figuring out how best to interconnect both many cores on a single chip and many chips to one another is a major challenge. So is how to move a computation forward when it is no longer possible to synchronize all of a chip’s processors with a signal from a central clock, as is done today. New solutions like “transactional memory” will allow different processes to efficiently share memory without introducing errors.
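Real transactional memory lives in hardware or in heavily optimized runtime libraries, but a toy software version conveys the idea. In the Python sketch below, the names TVar, atomically, and transfer are invented for the example; each transaction records what it read, and its writes are applied only if none of those values changed in the meantime, otherwise the whole transaction quietly retries:

```python
# A toy software-transactional-memory sketch (illustrative only).
import threading

class TVar:
    """A shared variable with a version counter used to detect conflicts."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def atomically(transaction):
    """Rerun `transaction(read, write)` until it commits without a conflict."""
    while True:
        reads = {}    # TVar -> version number observed when it was read
        writes = {}   # TVar -> new value to commit

        def read(tvar):
            reads[tvar] = tvar.version
            return writes.get(tvar, tvar.value)

        def write(tvar, value):
            writes[tvar] = value

        transaction(read, write)

        # Commit: lock everything touched, in a fixed order to avoid deadlock,
        # check that nothing changed since it was read, then apply the writes.
        touched = sorted(set(reads) | set(writes), key=id)
        for tv in touched:
            tv.lock.acquire()
        try:
            if all(tv.version == seen for tv, seen in reads.items()):
                for tv, val in writes.items():
                    tv.value = val
                    tv.version += 1
                return
        finally:
            for tv in touched:
                tv.lock.release()
        # A conflicting update slipped in; loop around and retry.

# Hypothetical usage: move money between two shared balances.
a, b = TVar(100), TVar(0)

def transfer(read, write):
    write(a, read(a) - 10)
    write(b, read(b) + 10)

atomically(transfer)
```

The appeal is that the programmer describes what must happen together, as in transfer above, and the system worries about making that safe when many cores are touching the same memory.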

The overall problem is so difficult because the hardware is only as good as the software, and the software only as good as the hardware. One way around this chicken-and-egg problem will be “autotuning” systems that replace traditional compilers. Compilers translate a program in a high-level language into a specific set of low-level instructions. Autotuning will instead try out many different possible translations of a high-level program to see which works best.
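Here is a deliberately tiny Python sketch of that idea; the candidate functions are invented for the example, and real autotuners search far larger spaces of far more complicated code:

```python
# A minimal autotuning sketch: time several equivalent implementations of the
# same computation and keep whichever runs fastest on this particular machine.
import timeit

def sum_with_loop(data):
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):
    return sum(data)

def sum_chunked(data, chunk=1024):
    # A blocked variant, the kind of transformation a translator might try.
    return sum(sum(data[i:i + chunk]) for i in range(0, len(data), chunk))

def autotune(candidates, sample, repeats=5):
    """Time each candidate on representative input and return the fastest."""
    timings = {
        fn: min(timeit.repeat(lambda: fn(sample), number=10, repeat=repeats))
        for fn in candidates
    }
    return min(timings, key=timings.get)

data = list(range(100_000))
best = autotune([sum_with_loop, sum_builtin, sum_chunked], data)
print("fastest variant on this machine:", best.__name__)
```

The payoff is that the same program can end up with different low-level code on different machines, because the choice is made by measurement rather than by a fixed set of compiler rules.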

Autotuning and transactional memory are just two of many new techniques being developed by computer scientists to take advantage of parallelism. There is no question the new techniques are harder for programmers. One group at Berkeley calls it the end of the “La-Z-Boy era” of sequential programming.

More answers to the question “What comes after the computer chip?”

Biomedical breakthroughs, says Stanford’s H.-S. Philip Wong
Better brain-computer interfaces, writes Sethuraman Panchanathan

Nature-inspired computing, according to Stephen Goodnick