Future Tense

What Comes After the Computer Chip? Better Brain-Computer Interfaces.

Intel co-founder Gordon Moore famously wrote about how the number of transistors on silicon chips would double roughly every two years—an observation now known as Moore’s Law. But even as Intel pushes into nanotechnology, computing is now reaching the limits of that law. On Thursday, March 21, former Intel CEO Craig R. Barrett and Arizona State University President Michael Crow will be at the Phoenix Art Museum to answer the question, “What comes after the computer chip?” Ahead of the event, which is being hosted by Zócalo Public Square, we’ll be publishing a series of blog posts in which experts weigh in. For more information, visit the Zócalo Public Square website. (Zócalo Public Square is a partnership of the New America Foundation and Arizona State University; Future Tense is a partnership of Slate, New America, and ASU.)
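
As a rough illustration of what Moore's observation implies, a two-year doubling period compounds quickly. The short Python sketch below shows the arithmetic; the starting figures are purely illustrative, not Intel's actual counts.

```python
# Back-of-the-envelope Moore's Law projection: transistor count doubling
# roughly every two years. The starting figure below is illustrative only.
def projected_transistors(start_count, start_year, target_year, doubling_period=2.0):
    """Project a transistor count forward assuming one doubling per `doubling_period` years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Example: a hypothetical chip with 1 billion transistors in 2010,
# projected to 2020 under a strict two-year doubling cadence.
print(f"{projected_transistors(1e9, 2010, 2020):.2e}")  # ~3.2e+10, i.e. roughly 32 billion
```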

The evolutionary path of computing will no doubt result in ever-increasing processing capacity through higher-density, lower-power circuits, miniaturization, parallelization, and alternative forms of computing (such as quantum computing). These advances will address the demands of large-scale, big-data processing as well as the massive adoption of multimedia and multimodal computing across applications.

However, future computing devices will have to shift from data- and information-level processing to higher levels of cognitive processing. For example, computing devices will be able to understand subtle cues, such as intent, in human communication, not just explicit cues such as prosody, expressions, and emotions. This will usher in a new era in computing in which the familiar paradigm of humans explicitly interacting with computers, at ever higher levels of sophistication, will be augmented by devices that also interact with humans implicitly. This “person-centered” engagement, in which man and machine work as collaborative partners, will support a range of tasks, from simple to complex. Computing devices on the body, in the body, and in the environment, as well as next-generation applications, will require the user to engage in a symbiotic relationship with her devices, termed “coaptive computing.”

Computing devices (like prosthetic devices) working coaptively with the user will assist her with tasks predetermined by their role and purpose, and can even learn explicitly through her instructions. More important, such devices will need to learn implicitly, by observing the user’s interactions with her environment, thereby relieving her of the usual “mundane” tasks. This will enable users to enhance their capabilities and engage at higher levels of cognition, something that has not been possible thus far because of our limited capacity for multisensory perception and cognition.
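
To make the idea of implicit learning a bit more concrete, here is a minimal, hypothetical sketch of a device that watches the user’s interactions with her environment and offers to take over a task once a pattern repeats. The class, names, and thresholds are illustrative assumptions, not any existing system.

```python
# A toy sketch of "coaptive" implicit learning: the device records which action
# the user takes in a given context, and once a pattern repeats often enough it
# offers to handle that mundane task itself. All names here are hypothetical.
from collections import Counter, defaultdict

class CoaptiveAssistant:
    def __init__(self, confidence_threshold=3):
        self.observations = defaultdict(Counter)  # context -> Counter of observed actions
        self.threshold = confidence_threshold

    def observe(self, context, action):
        """Record an interaction between the user and her environment."""
        self.observations[context][action] += 1

    def suggest(self, context):
        """Offer to take over an action once it has been observed often enough."""
        if not self.observations[context]:
            return None
        action, count = self.observations[context].most_common(1)[0]
        return action if count >= self.threshold else None

assistant = CoaptiveAssistant()
for _ in range(3):
    assistant.observe("arriving at office", "silence phone")
print(assistant.suggest("arriving at office"))  # -> "silence phone"
```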

For example, the user may recall only a few encounters with people and things at an event, simply because she was focused on those particular people and objects. Future computing devices, however, can essentially recall all of the encounters in a “life log,” along with their context, and could prompt or inform the user as appropriate in her subsequent interactions. As coaption becomes more pervasive, the future of brain-computer interfaces will increasingly become a reality.
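
One way to picture such a “life log” is as a simple store of encounters tagged with context, which the device can later surface to prompt the user. The toy sketch below uses invented field names purely for illustration.

```python
# A toy sketch of the "life log" idea: every encounter is stored with its
# context, so the device can later remind the user about people she met but
# may not recall. The structure and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Encounter:
    person: str
    event: str
    context: str  # e.g., what was discussed, where it happened

@dataclass
class LifeLog:
    encounters: list = field(default_factory=list)

    def record(self, person, event, context):
        self.encounters.append(Encounter(person, event, context))

    def recall(self, person):
        """Return every logged encounter with this person, with its context."""
        return [e for e in self.encounters if e.person == person]

log = LifeLog()
log.record("Alice", "ASU reception", "discussed a quantum computing demo")
log.record("Bob", "ASU reception", "brief hello near the entrance")

# Later, when the user runs into Alice again, the device can prompt her:
for e in log.recall("Alice"):
    print(f"You met {e.person} at {e.event}: {e.context}")
```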

No longer will we think of a computer chip as just a physical entity; instead, we will see it as a ubiquitous device, conjoined with humans and operating seamlessly as a partner in everyday activities.

More answers to the question “What comes after the computer chip?”

Biomedical breakthroughs, says Stanford’s H.-S. Philip Wong.
Nature-inspired computing, according to Stephen Goodnick.
The end of the “La-Z-Boy era” of sequential programming, writes Konstantin Kakaes.