Bitwise

The Theory of Everything and Then Some

In complexity theory, physicists try to understand economics while sociologists think like biologists. Can they bring us any closer to universal knowledge?

John H. Miller posits that any large network of interacting pieces, whether an anthill or a nation, could perform the sort of work that we think of a brain as doing. Image by agsandrew/Thinkstock.

The world has gotten a lot smaller over the past century, but the store of knowledge has become unfathomably large. One way to think about it: Last week, I was able to fly across the country in five hours while carrying 10,000 PDFs on my laptop. In his new book A Crude Look at the Whole: The Science of Complex Systems in Business, Life, and Society, complexity theorist John H. Miller puts it this way: “Science has proceeded by developing increasingly detailed maps of decreasingly small phenomena.” The rise of complexity theory, an interdisciplinary field studying the emergent behavior and patterns of the interactions of simple (and not so simple) components, has been one of the most important responses to this ballooning of knowledge, which in 1964 Stanislaw Lem called the “megabyte bomb.” That term may have seemed scary in its time; now it just sounds hilariously and anachronistically small.

As befits a field that Carnegie Mellon scholar Cosma Shalizi said could also be termed “neat nonlinear nonsense,” complexity theory originates less from a particular field of science than from a particular sort of personality—the restless scientist who does not discount the humanities, the kind who follows Jacob Burckhardt’s dictum that a person “should be an amateur at as many points as possible, privately at any rate, for the increase of his own knowledge and the enrichment of his possible standpoints.” (Not for nothing does Miller quote both Thomas Pynchon and James Joyce in his book.) This sort of personality doesn’t settle on a single worldview that it finds definitive, but struggles to draw inexact—sometimes very inexact—analogies between fields as different as slime mold biology, cellular automata, and free-market economics. Indeed, the famously restless mathematician Stanislaw Ulam, who worked on the Manhattan Project and invented the ubiquitous Monte Carlo method of solving problems through statistical sampling, once said that artificial intelligence could only succeed if it were to encompass analogy as a fundamental form of thinking. The influence of complexity theory, consequently, has been hard to quantify, because its impacts are spread among other fields, rather than in its own cordoned-off territory.
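Ulam’s Monte Carlo method is itself a neat miniature of this sensibility, so a concrete example may help. The sketch below is the classic textbook illustration (my example, not Miller’s): estimate π by scattering random points in a unit square and counting how many land inside the inscribed quarter circle.

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Estimate pi by statistical sampling: the fraction of random points
    in the unit square that land inside the quarter circle approaches pi/4."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

if __name__ == "__main__":
    print(estimate_pi())  # roughly 3.14; the estimate sharpens with more samples
```

The answer is never exact, but it improves with more samples, which is the method’s whole bargain: trading certainty for tractability.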

In the centuries since the Renaissance and the Enlightenment, there grew the conceit that the universe could be entirely described and even predicted. The most famous embodiment of this idea is Laplace’s demon, the French scientist Pierre-Simon Laplace’s conception of an intelligent entity that, knowing the exact location and momentum of every particle in the universe, could calculate the entirety of the past and future. Laplace described a deterministic, ordered universe: “We may regard the present state of the universe as the effect of its past and the cause of its future.”

Science didn’t bear out this vision. Nondeterministic phenomena like quantum mechanics and deterministic-yet-chaotic systems like the weather hit the brakes hard on our journey toward omniscience. Restless scientists such as Murray Gell-Mann, Gian-Carlo Rota, John H. Holland, Heinz Pagels, and Melanie Mitchell gathered at the Santa Fe Institute (founded in 1984) to put forth a vision of an interdisciplinary framework to go beyond the walled-off professional microdomains that dominated academic research then and now. Encompassing everything from biology to physics to economics to sociology, complexity theory provides not a single philosophy but a looser toolset of themes and techniques with which to grasp, in the Gell-Mann quote that gives Miller’s book its title, “a crude look at the whole.” In Miller’s account, complexity theory is the antidote to reductionism, the idea that complete knowledge of some fundamental field, like physics or biology, is enough to capture the entirety of all other sciences; that was the thesis of Edward O. Wilson’s Consilience, though the basic idea goes back as far as humanity’s ego does, which is to say millennia. Miller begins with a very concrete and resonant example: The algorithms that drove high-frequency trading programs in no way gave any indication that, in the context of the financial markets and in real-time interaction with one another and other trading entities, they could cause a destabilizing “flash crash” in the e-mini futures market and a tsunami of other consequences, as they did on May 6, 2010—nor that this tsunami could have been stopped and the market stabilized merely by halting trading for a few minutes. Knowledge of the pieces, Miller says, is not enough.

In particular, complexity theory is concerned with the idea of emergence, or the notion that “simple local rules can have complex global implications.” “We inhabit a world where even the simplest parts can interact in complex ways,” Miller writes, “and in so doing create an emerging whole that exhibits behavior seemingly disconnected from its humble origins. By its very nature, emergent behavior is easy to anticipate but hard to predict.” That these global implications are mostly reliable and predictable—that our complex global network of markets, supply chains, and institutions continues to function in the absence of a centralized controller—seems miraculous once you look at it, which makes it all the scarier when the right (or wrong) chain of conditions sets off a destabilization like the flash crash. Miller’s book portrays two basic sides to complexity theory: first, how to understand and control complex systems to our benefit in order to minimize negative emergent behavior; and second, how the principles of complexity can generate positive (even intelligent) emergent behavior out of far simpler base materials, such as slime molds and honeybee hives—or even brain neurons.
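The canonical demonstration of emergence, and one of complexity theory’s favorite toys, is the cellular automaton mentioned earlier. The sketch below (my illustration, not an example from Miller’s book) runs Wolfram’s Rule 30: each cell consults only itself and its two immediate neighbors, yet the grid as a whole produces intricate, hard-to-predict structure from a single seeded cell.

```python
RULE = 30  # the local update rule, encoded in 8 bits; try 110 or 90 as well

def step(cells: list[int]) -> list[int]:
    """Update every cell from its own value and its two neighbors (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # 3-bit neighborhood
        new.append((RULE >> pattern) & 1)              # look up the rule's answer
    return new

# Start from a single "on" cell and watch global structure emerge.
width, generations = 64, 32
row = [0] * width
row[width // 2] = 1
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The rule fits in a single byte; the pattern that scrolls out of it does not reduce to any obvious summary, which is the point.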

The book covers a lot of ground very quickly, which can occasionally be dizzying and unsatisfying. This is something of an intrinsic consequence of the field. Because complexity is about a set of abstract ideas, models, and tools rather than a narrow model or approach, complexity theory best shows its strengths through a fencepost display of how the same tools can apply in drastically different situations. (This is a problem I face in my own writing on information and algorithms, where readers need concrete examples even though many coders are equally happy working on nearly anything, because the same abstract problems keep recurring.) These abstract ideas, which include feedback, heterogeneity, randomness, decentralization, scaling laws, and self-organized criticality, are like thermodynamic entropy, hovering somewhere between hard laws of nature like Newton’s and generalized patterns of observation, such as, “Isn’t it amazing how free markets often stabilize instead of spontaneously collapsing?” Figuring out exactly where these patterns of complexity lie on that continuum is one of the complexity theorist’s greatest challenges. As Miller writes, “There is power in being able to develop generalized insights across broad domains, even if these efforts sometimes fail.”

One of Miller’s strongest examples is his application of nonlinear search algorithms to chemotherapy cocktail design. As both the amount of medical data in an individual diagnosis increases and the number of treatment options balloons, complexity-based methods may become necessary tools in reducing an intractable number of combinations down to an optimized selection in the treatment of diseases such as AIDS and cancer that do not respond to a single drug.
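Miller doesn’t lay out the algorithm in a form that reproduces here, but a generic stand-in conveys the flavor: a simple hill climb over which drugs to include in a cocktail, evaluated against an entirely hypothetical scoring function. In the sketch below, efficacy is invented for illustration (real work would plug in assay or patient-response data); only the structure of the search matters.

```python
import random

DRUGS = 12    # hypothetical size of the drug library
TRIALS = 200  # local-search steps to attempt

def efficacy(cocktail: tuple[int, ...]) -> float:
    """Hypothetical stand-in for a lab assay that scores a drug combination.
    Deterministic per cocktail, with a fake nonlinear interaction term."""
    rng = random.Random(hash(cocktail))
    interaction = rng.uniform(-1.0, 1.0)
    return 0.4 * sum(cocktail) + interaction - 0.05 * sum(cocktail) ** 2

def hill_climb() -> tuple[tuple[int, ...], float]:
    """Flip one drug in or out at a time, keeping any change that scores better.
    This explores a 2**DRUGS space without ever enumerating it."""
    current = tuple(random.randint(0, 1) for _ in range(DRUGS))
    best = efficacy(current)
    for _ in range(TRIALS):
        i = random.randrange(DRUGS)
        candidate = current[:i] + (1 - current[i],) + current[i + 1:]
        score = efficacy(candidate)
        if score > best:
            current, best = candidate, score
    return current, best

if __name__ == "__main__":
    cocktail, score = hill_climb()
    print("best cocktail found:", cocktail, "score:", round(score, 3))
```

A greedy climb like this can get stuck on local peaks, which is exactly why the literature reaches for the fancier nonlinear searches Miller describes.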

But complexity theory’s drive to unify disparate phenomena under a single umbrella can become speculative. Miller writes:

We can (we hope) apply the same general principles we’ve uncovered at one level to another. Perhaps the ways that interacting atoms create molecular behavior, interacting molecules create chemical behavior, interacting chemicals create neuron behavior, interacting neurons create individual behavior, interacting individuals create colony behavior, and interacting colonies create ecosystem behavior are all governed by similar principles.

In these examples, Miller illustrates the fundamental gamble of complexity theory, which is the extent to which it can pin down these principles. Across these disparate cases, the same general mechanisms may apply, but it’s unclear to what extent they actually reflect the same underlying operational principles, as opposed to simply describing sufficiently general constraints under which a system operates (such as “responding to external conditions” or “correcting errors” or “goal-directed behavior”), be it a hive or a brain.

For example, there is the continuing controversy over “scale-free networks,” whose data fit a particular kind of “long tail” distribution called a power law, in which the proportion between two respective pieces of data repeats itself at different scales—analogous to a fractal that repeats the same structure at various scales. Some researchers have claimed to find power-law relationships in everything from website popularity to city growth to species variation, to the extent that some see power laws as some mystical force unifying a “self-organizing” universe. This is, unfortunately, closer to numerology than science; in his book The Nonlinear World, Yoshitsugu Oono quotes his adviser Tomo-o Oyama wryly saying, “Log-log plot [of power law relationships] is a standard trick to hide error bars.” Yet such observable mathematical relationships do exist. Miller describes how linguist George Zipf discovered a power law for word frequency with an exponent of negative 1: “So the word that is the second most commonly used in a text will occur about half as often as the word that’s most commonly used. The third-most-common word occurs one-third as often, and so on. This relationship holds across a variety of languages (including languages that are randomly generated).” But when Miller cites Lewis Fry Richardson’s work on deaths in warfare and describes another power law—“we find that the number of wars is proportionate to the number of deaths raised to roughly the −½ power”—that’s roughly true but far less illuminating or conclusive. And as Aaron Clauset, Cosma Shalizi, and M.E.J. Newman found, Zipf’s finding is a rare case of a power law truly fitting the data:

There is only one case—the distribution of the frequencies of occurrence of words in English text—in which the power law appears to be truly convincing, in the sense that it is an excellent fit to the data and none of the alternatives carries any weight … the distributions for birds, books, cities, religions, wars, citations, papers, proteins, and terrorism are plausible power laws, but they are also plausible log-normals and stretched exponentials.
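Zipf’s negative-1 exponent, at least, is easy to check for yourself. The sketch below (mine, not Miller’s) counts the words in any long text and compares the observed falloff against the predicted 1/rank curve; the file name is only a placeholder.

```python
from collections import Counter

def zipf_check(text: str, top: int = 10) -> None:
    """Compare observed word frequencies with Zipf's prediction: the k-th most
    common word should appear roughly 1/k as often as the most common one."""
    counts = Counter(text.lower().split()).most_common(top)
    most_common_freq = counts[0][1]
    for rank, (word, freq) in enumerate(counts, start=1):
        print(f"{rank:2d}  {word:15s} observed {freq / most_common_freq:.3f}  predicted {1 / rank:.3f}")

# Usage (placeholder path; any novel-length plain-text file will do):
# zipf_check(open("sample_text.txt", encoding="utf-8").read())
```

Run it on a novel and the observed ratios track the prediction surprisingly well; for wars, cities, and the rest, as Clauset and company found, the story is murkier.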

Miller leaves readers uncertain of just how universal complexity models really are, whereas a book such as Melanie Mitchell’s excellent (and more cautious) Complexity: A Guided Tour is more forthright and sober about the doubts that have been raised. The overuse of power law models is an example of complexity theory not entertaining enough complexity.

Miller’s whirlwind tour is generally well-grounded; he stresses complexity theory as an approach and a complementary way of thinking, not as a cure-all. Sometimes, however, Miller’s reach just exceeds his grasp, never more so than when he examines the human brain and declares that there’s no “there” there. “We likely are surrounded by brains everywhere,” he writes, “some of which we easily recognize and admire and others of which we are only beginning to understand.” There’s nothing special about the network of the brain, Miller is saying; any large network of interacting pieces, whether an anthill or a nation, could perform the sort of work that we think of a brain as doing. Perhaps this is true, but it’s a big perhaps. The problem is that there just isn’t anything else as outright complicated as the human brain, with its 100 billion neurons and its 100 trillion synapses, so we have no idea whether its emergent properties resemble those of networks with vastly fewer components (that is, pretty much all other networks), or if some new sort of magic happens once you get up into sufficiently high numbers, just as machine learning is now revealing new potential only as it attains new heights of computation and data.

The book ends on a strong note with an overview of Markov chain Monte Carlo (or MCMC) algorithms, which are ingeniously used to tame beastly machine-learning algorithms that would otherwise be far too time-consuming to evaluate. As big data networks multiply, approaches for constraining and comprehending the vast swaths of information at our disposal will need to become more general and interdisciplinary; they need to adapt so that we don’t have to start from scratch by hand in each new domain.

Just as complexity theory has its origins in a conscious bucking of the trend toward specialization, its strengths and weaknesses are the opposite of what one finds in most fields. At its best, complexity theory is the science of imperfection, ignorance, accident, and error. Its goal is to constrain and exploit these demons for our advantage. Through the selective, careful incorporation of randomness and noise into models of problems that would otherwise be too complex to solve, it seeks to improve, but not perfect, our handling of these problems. At its worst, complexity theory spills into mysticism about the hidden universal patterns of nature and society, which has sometimes made the field an unfortunate handmaiden to both postmodern jargon and business doublespeak (which are more closely related than you might think).

Miller mostly sticks to the best that complexity has to offer, but his book still demonstrates complexity theory’s strengths and weaknesses—as well as how closely they are coupled. Miller writes, “The ultimate hope in the science of complex systems is that honeybee hives, financial markets, and brains are deeply connected—or, for that matter, not all that different from other biological organisms, cities, companies, political systems, computer networks, and on and on. A honeybee swarm may just be a more easily observed instance of a brain.” I would express the ambitions of complexity theory more modestly: Even if these things are not deeply connected or deeply similar, a more generalized toolset can still prove just as useful for solving domain-specific problems as the localized toolset that’s evolved within a field. If all these things were as similar as Miller hopes, complexity really would be too simple.
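For readers curious what an MCMC algorithm actually looks like, here is a minimal Metropolis sampler, a standard textbook sketch rather than anything taken from Miller’s book. It draws samples from a distribution known only up to a constant by proposing random steps and accepting or rejecting them, randomness put to work in exactly the spirit described above.

```python
import math
import random

def metropolis(log_density, start: float, steps: int = 10_000, scale: float = 1.0) -> list[float]:
    """Metropolis algorithm: propose a random step and accept it with probability
    min(1, p(new)/p(old)). The chain's samples follow the target density without
    ever computing its normalizing constant."""
    samples, x = [], start
    for _ in range(steps):
        proposal = x + random.gauss(0.0, scale)
        if math.log(random.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Example target: a standard normal, specified only up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, start=0.0)
print(sum(chain) / len(chain))  # the sample mean hovers near 0
```

The same few lines, pointed at a high-dimensional model instead of a toy bell curve, are the kind of tool Miller has in mind for taming otherwise intractable computations.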

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.