Science

How Much Can Your Brain Actually Process? Don’t Ask.

Stop measuring brains against digital computers. 

Measuring the size of the human brain? Impossible.

pixologicstudio/Thinkstock

Earlier this month, a computer program called AlphaGo defeated a (human) world champion of the board game Go, years before most experts expected computers to rival the best flesh-and-bone players. But then last week, Microsoft was forced to silence its millennial-imitating chatbot Tay for blithely parroting Nazi propaganda and misogynistic attacks after just one day online, her failure a testimony to the often underestimated role of human sensibility in intelligent behavior. Why are we so compelled to pit human against machine, and why are we so bad at predicting the outcome?

As the number of jobs susceptible to automation rises, and as Stephen Hawking, Elon Musk, and Bill Gates warn that artificial intelligence poses an existential threat to humanity, it’s natural to wonder how humans measure up to our future robot overlords. But even those tracking technology’s progress in taking on human skills have a hard time setting an accurate date for the uprising. That’s in part because one prediction strategy popular among both scientists and journalists—benchmarking the human brain with digital metrics such as bits, hertz, and million instructions per second, or MIPS—is severely misguided. And doing so could warp our expectations of what technology can do for us and to us.

Since their development, digital computers have become a standard metaphor for the mind and brain. The comparison makes sense, in that brains and computers both transform input into output. Most human brains, like computers, can also manipulate abstract symbols. (Think arithmetic or language processing.) But like any metaphor, this one has limitations.

Reported estimates of how much data the brain holds in long-term memory range from 100 megabytes to 10 exabytes—in terms of Thriller on MP3, that’s either one album or 100 billion albums. This range alone should give you an immediate sense of how seriously to take the estimates. A typical calculation goes something like this: There are about 100 billion neurons in the human brain. On average each sends outgoing connections to roughly 10,000 other neurons. Calculating the capacity of these connections, or synapses, is assumed to give a sense of the brain’s total capacity. So let’s say the strength of a synapse can be described using 1 byte (8 bits). Multiply 100 billion neurons by 10,000 synapses by 1 byte. The result: 1 petabyte. Part of this method is straightforward, in that neurons and synapses are fairly discrete and countable. But a problem arises when we try to digitize the strength of a connection between neurons. Sometimes 1 byte is offered as the amount of data needed to characterize a synapse. But in fact its information content is extremely hard to quantify.
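For concreteness, here is that back-of-the-envelope arithmetic as a small Python sketch. The figures are the commonly cited round numbers from the paragraph above, not measurements, and the 1-byte-per-synapse figure is precisely the assumption the rest of this piece calls into question.

```python
# Back-of-the-envelope "brain capacity" estimate, using the round numbers above.
neurons = 100e9               # ~100 billion neurons
synapses_per_neuron = 10_000  # each neuron connects to ~10,000 others
bytes_per_synapse = 1         # the dubious assumption: 1 byte captures a synapse's strength

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes:.1e} bytes = {total_bytes / 1e15:.0f} petabyte")
# 1.0e+15 bytes = 1 petabyte
```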

That’s because in many ways, the brain is not digital but analog.

The fundamental difference between analog and digital information is that analog information is continuous and digital information is made of discrete chunks. Digital computers work by manipulating bits: ones and zeroes. And operations on these bits occur in discrete steps. With each step, transistors representing bits switch on or off. Jiggle a particular atom on a transistor this way or that, and it will have no effect on the computation, because with each step the transistor’s status is rounded up or down to a one or a zero. Any drift is swiftly corrected.
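A toy sketch (in Python, not circuit physics) of the difference: a digital step rounds a slightly perturbed value back to a clean 0 or 1, while an analog quantity simply keeps the perturbation.

```python
# Toy illustration of digital error correction vs. analog drift.
def digital_step(voltage, threshold=0.5):
    """Round an input to a clean bit, as a logic gate effectively does each step."""
    return 1 if voltage >= threshold else 0

clean, jiggled = 0.90, 0.93            # a tiny "atomic jiggle" added to the signal
print(digital_step(clean), digital_step(jiggled))  # 1 1 -> the jiggle is erased

# An analog quantity has no such rounding: the small difference persists
# and can propagate into whatever depends on it.
print(clean == jiggled)                # False
```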

On a neuron, however, jiggle an atom this way or that, and the strength of a synapse might change. People like to describe the signals between neurons as digital, because a neuron either fires or it doesn’t, sending a one or a zero to its neighbors in the form of a sharp electrical spike or lack of one. But there may be meaningful variation in the size of these spikes and in the likelihood that nearby neurons will spike in response. The particular arrangement of the chemical messengers in a synapse, or the exact positioning of the two neurons, or the precise timing between two spikes—these all can have an impact on how one neuron reacts to another and whether a message is passed along.

Plus, synaptic strength is not all that matters in brain function. There are myriad other factors and processes, both outside neurons and inside neurons: network structure, the behavior of support cells, cell shape, protein synthesis, ion channeling, vesicle formation. How do you calculate how many bits are communicated in one molecule’s bumping against another? How many computational “operations” is that? “The complexity of the brain is much higher at the biochemical level” than models of neural networks would have you believe, according to Terrence Sejnowski, the head of the Salk Institute’s Computational Neurobiology Laboratory. “The problem is that we don’t know enough about the brain to interpret the relevant measurement or metric at that level.”

Yet many of these details matter. Studies into other creatures’ brains, for example, have shown that there’s something more to neurons than we can replicate digitally, at least for now. For example, Paul Roossin, a neurobiologist and A.I. researcher, notes that the 302 neurons in the C. elegans roundworm “do some quite amazing learning and behavior.” But “try to make an artificial neural net with 302 neurons, and you’ll see that it can hardly do anything. So those other parameters must be important—or else there are noncorporeal elements at play, such as worm souls.” Build a computer that runs on worm souls, and you’ll be rich. Otherwise, we may need to get every physical detail right. Consider the butterfly effect: In a dynamical physical system like the weather or a brain, a tiny difference can end up making a big difference. And if we need to encode infinite detail, we need infinite bits.

In practice, though, the details that matter might not be infinite. Although each nuance of the brain will necessarily affect its behavior at least slightly, those nuances might not all carry information important for the brain’s overall functioning. In neural signaling there’s a lot of noise, or random variance—similar to radio static. Zoom down enough, experts argue, and you hit a floor, beyond which there’s no usable information. Random quantum fluctuation is not computation.

But scientists are far from determining what counts as noise and what counts as signal. Which variations matter? “Some (or most) noise may be signal we have not explained yet,” Alexander Dimitrov, a computational neuroscientist and expert on information theory at Washington State University, Vancouver, wrote to me over email. “So for now ‘noise’ is an operational definition of ‘the stuff we don’t understand yet.’ ” According to Ken Miller, co-director of Columbia University’s Center for Theoretical Neuroscience, the parsing of neural signal and noise is “not a very crisply defined question yet.” The noise floor apparently does not present a discrete boundary. Instead, it’s covered in shag carpet, full of buried crumbs of information.

One way to bypass the task of picking out the crumbs and measuring them each in bits is to ask how big a silicon computer we’d need to match the brain’s overall performance. But this approach fails too. Calculator watches can handily beat the best mathematician at arithmetic, and yet our best robots can’t match a 3-year-old in tying shoelaces. So you can say a system with x bits or MIPS can match the brain at some well-defined task y, such as chess or Go, but that’s different from saying a system with x bits or MIPS can match the brain’s performance as a whole.

And if you did want to build a computer that could match all of the brain’s strengths, you’d have to match all of its weaknesses, too. Airplanes solve flight differently from how birds do, and though they can surpass birds in speed or carrying capacity, they can’t self-heal or reproduce (to Boeing’s delight). What we pay in brain farts we gain in efficiency—our noggins do everything they do with 20 watts of power. My MacBook Pro uses 100. If you want a computer program to perform identically to a human mind, it would have to run on a human brain. So much for mind uploading.

Ultimately the mind is not a reserve of abstract information but a property of a squishy machine (unless you believe in souls, wormy or otherwise). While bits exist independently of any substrate—you could store them with on and off transistors or with red and blue M&M’s—analog information is bound to its substrate. In effect, it is its substrate. Asking about the information stored in your brain is like asking about the information stored in your car engine. The answer is that a car engine contains exactly one car engine of information. Material matters: You could build a digital simulation of a V8 on your laptop, but it won’t push your Chevy. Would a simulation of a brain think as you do?

The experts I spoke to tended to think there was some ground truth to be estimated—one could theoretically measure the brain in bits or MIPS—but they all conceded that, for now, doing so was to some extent an act of imagination. According to Columbia’s Miller: “I wouldn’t take any particular number too seriously … How to characterize this information remains vague and at best we can get ballpark estimates—which may in fact be wildly off and not in the ballpark at all.” Christof Koch, the president of the Allen Institute for Brain Science: It “totally depends on all sorts of unproven assumptions.” The neurobiologist Paul Roossin: “It’s all so ridiculously crude, and the results are likely off by huge orders of magnitude.” Rodney Douglas, a computational neuroscientist at the Institute of Neuroinformatics, in Switzerland: “Most scientists who use these metrics for comparison of computers and brains are of course aware that they are not entirely appropriate!” Sebastian Seung, a computational neuroscientist at Princeton University and the author of Connectome: “Yes it’s sort of meaningless but we can’t avoid attempting such comparisons anyway :)”

So why should we care if people toss meaningless numbers around? First, it’s misleading about how the brain works. Second, it’s misleading about the progress of artificial intelligence. The New York Times, for instance, once described TrueNorth, an IBM chip built to mimic networks of neurons, as having “one million ‘neurons’ ” and thus being “about as complex as the brain of a bee.” A bee has about a million neurons, but in making this comparison the Times was implying (among other things) that IBM’s “neurons” were about as complex as a bee’s neurons, which is absurd. We are much further from reproducing a bee’s intelligence than the reader was led to believe.

So if the brain is so much more complex than a digital computer, how did a computer beat a top human at such a “human” game as Go? Largely because the challenge Go presents, compared with the chaotic world we evolved in, is actually quite simple: clear rules, few elements, and a well-defined goal. Not many doubted that Go would fall to A.I. eventually. The victory confirms merely that we can build computers that are better at one discrete task than our own brains (as we’ve done before). Harder would be asking a computer to raise a child, or, apparently, emulate a hip 19-year-old on Twitter without denying the Holocaust.

I don’t dare name a task at which digital computers will never best us, but forecasting the timing of such feats remains difficult. As we’ve seen, there is no easy way to calculate, based on hardware specs, when a silicon brain will overpower an organic one. As long as technological advancement derives from human know-how, its pace will be as unpredictable as the messy brilliance of its worm-feeding creators.