Explain It to Me Again, Computer
What if technology makes scientific discoveries that we can’t understand?
This article arises from Future Tense, a partnership of Slate, the New America Foundation, and Arizona State University. On Feb. 28-March 2, Future Tense will be taking part in Emerge, an annual conference on ASU’s Tempe campus about what the future holds for humans. This year’s theme: the future of truth. Visit the Emerge website to learn more and to get your ticket.
When scientists think about truth, they often think about it in the context of their own work: the ability of scientific ideas to explain our world. These explanations can take many forms. On the simple end, we have basic empirical laws (such as how metals change their conductivity with temperature), in which we fit the world to some sort of experimentally derived curve. On the more complicated and more explanatory end of the scale, we have grand theories for our surroundings. From evolution by natural selection to quantum mechanics and Newton’s law of gravitation, these types of theories can unify a variety of phenomena that we see in the world, describe the mechanisms of the universe beyond what we can see with our own eyes, and yield incredible predictions about how the world should work.
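The "simple end" described above, fitting an experimentally derived curve, can be sketched in a few lines. The resistivity values and the linear model below are hypothetical, chosen only to illustrate how an empirical law is pulled out of measurements:

```python
# A minimal sketch of fitting a simple empirical law: the roughly linear
# rise of a metal's electrical resistivity with temperature.
# The data points below are made up for illustration.

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical measurements: temperature (deg C) vs. resistivity (arbitrary units)
temps = [0, 25, 50, 75, 100]
rhos = [1.00, 1.10, 1.20, 1.30, 1.40]

a, b = linear_fit(temps, rhos)
print(f"rho(T) = {a:.2f} + {b:.4f}*T")  # prints "rho(T) = 1.00 + 0.0040*T"
```

The fitted slope plays the role of the temperature coefficient: the curve summarizes the data without explaining any deeper mechanism, which is exactly what separates this end of the scale from a grand theory.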
The details of how exactly these theories describe our world—and what constitutes a proper theory—are more properly left to philosophers of science. But adhering to philosophical realism, as many scientists do, implies that we think these theories actually describe our universe and can help us improve our surroundings or create impressive new technologies.
That being said, scientists always understand that our view of the world is in draft form. What we think the world looks like is constantly subject to refinement and sometimes even complete overhaul. This leads us to what is known by the delightful, if somewhat unwieldy, phrase pessimistic meta-induction. It’s true that we think we understand the world really well right now, but so did every previous generation, and they got it wrong. This is why scientists love Karl Popper, who argued that we can never prove a theory correct, only attempt to overturn it via falsification. So we must never be too confident that we are completely correct this time. In other words, we think our theories are true but still subject to potential overhaul. Which sounds a bit odd.
But when properly internalized, this can be wonderfully exciting. A professor of mine once taught a class on a Tuesday, only to read a paper the next day that invalidated what he had taught. So he went into class on Thursday and told the class, “Remember what I told you on Tuesday? It’s wrong. And if that worries you, you need to get out of science.” Science is always in this draft form, and nowhere is that clearer than at the frontier: it is where scientists work and why they find their inquiry so exciting.
As I discuss in my book The Half-Life of Facts, this is not always a process of completely forward progress, but overall we are improving our view of the world and reducing error in our understanding. This was delightfully encapsulated in a quote by Isaac Asimov: “[W]hen people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.”
As we have improved our understanding of the shape of our planet, we have overhauled what we thought it looked like, moving from flat to perfectly spherical to an oblate spheroid. And along the way, we have reduced the amount of error in the measurement of our surroundings.
But whether or not science is always moving forward, and whether or not we think we have the final view of how the world works (which we almost certainly do not), we pride ourselves on our ability to understand our universe. Whatever its complexity, we believe that we can write down equations that articulate the universe in all its grandeur.
But what if this intuition is wrong? What if there are not only practical limits to our ability to understand the laws of nature, but theoretical ones?
On the practical side, it’s unsurprising to recognize that science might move less quickly than it should simply due to the massive size of what we know: A single individual can comb through only so much of the literature. For example, imagine there are two papers somewhere in the literature, one of which says that A implies B, and another that says B implies C. With the incredible growth of the scientific literature, it’s impossible for anyone to be familiar with all of the papers published in all scientific disciplines, let alone the new research in one’s own subfield. So these two papers remain uncombined, until a computer program finds some way to stitch these two ideas together, recognizing that A implies C, a discovery that was practically impossible due to the vast size of the literature.
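The stitching-together step described above can be sketched as a graph search: treat each paper's claim as a directed edge from premise to conclusion, then chain edges transitively. The claims below are hypothetical placeholders, not real findings:

```python
# A minimal sketch of literature-based discovery: each paper asserts a
# directed claim "X implies Y"; chaining claims transitively surfaces
# implications (like "A implies C") that no single paper states.

from collections import defaultdict, deque

def implied_conclusions(claims, start):
    """Return every term reachable from `start` by chaining claims."""
    graph = defaultdict(list)
    for premise, conclusion in claims:
        graph[premise].append(conclusion)
    seen, queue = set(), deque([start])
    while queue:
        term = queue.popleft()
        for nxt in graph[term]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Two papers buried in the literature: one says A implies B, another B implies C.
papers = [("A", "B"), ("B", "C")]
print(sorted(implied_conclusions(papers, "A")))  # prints ['B', 'C']
```

A real system would have to extract such claims from free text and weigh their reliability, which is far harder than the traversal itself; the sketch only shows why the combination step is mechanical once the claims are in hand.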
Samuel Arbesman is a senior scholar at the Kauffman Foundation and a fellow at the Institute for Quantitative Social Science at Harvard University. He is the author of The Half-Life of Facts.