Explain It to Me Again, Computer
What if technology makes scientific discoveries that we can’t understand?
These sorts of limits are exciting because we can construct algorithms to help us with such problems, discovering in partnership with machines. And once shown such a computationally discovered insight, we can readily grasp its meaning and the explanatory power it provides.
But what if it were possible to create discoveries that no human being can ever understand? Take, for example, a set of differential equations: while we have numerical and computational methods for handling them, not only can they be difficult to solve mathematically, but there is a decent chance that no analytical solution even exists.
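A small sketch makes this concrete. The function e^(-t²) is a classic case: its antiderivative cannot be written in elementary terms, yet a standard numerical method (here, a hypothetical hand-rolled Runge-Kutta integrator, not any particular library's) spits out the answer to high precision anyway. We get a number without a formula.

```python
import math

def rk4(f, y0, t0, t1, steps=1000):
    """Classic fourth-order Runge-Kutta integration of dy/dt = f(t, y)."""
    y, t = y0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# dy/dt = exp(-t^2) has no elementary antiderivative, yet the
# numerical answer at t = 1 falls out immediately:
y1 = rk4(lambda t, y: math.exp(-t * t), 0.0, 0.0, 1.0)

# The only "closed form" is a special function, erf, which is
# itself just a name for this very integral.
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(y1, exact)
```

The machine's answer agrees with the special-function value to many decimal places, even though no elementary formula for it exists.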
So what of this? Does such a hint of non-understandable pieces of reasoning and thought mean that eventually there will be answers to the riddle of the universe that are too complicated for us to understand, answers that machines can spit out but we cannot grasp? Quite possibly. We’ve already come close. A computer program known as Eureqa, designed to find patterns and meaning in large datasets, has not only recapitulated fundamental laws of physics but also found explanatory equations that no one really understands. And certain mathematical theorems have been proven by computers, and no one person actually understands the complete proofs, though we know that they are correct. As the mathematician Steven Strogatz has argued, these could be harbingers of an “end of insight.” We had a wonderful several-hundred-year run of explanatory insight, beginning with the dawn of the Scientific Revolution, but maybe that period is drawing to a close.
So what does this all mean for the future of truth? Is it possible for something to be true but not understandable? I think so, but I don’t think that’s a bad thing. Just as certain mathematical theorems have been proven by computers and can be trusted, we can at the same time endeavor to create more elegant, human-understandable versions of these proofs. Just because something is true doesn’t mean we can’t continue to explore it, even if we don’t understand every aspect.
But even if we can’t do this—and we have truly bumped up against our constraints—our limits shouldn’t worry us too much. The non-understandability of science is coming, in certain places and small bits at a time. We’ve grasped the low-hanging fruit of understandability and explanatory elegance, and what’s left might be exploitable, but not necessarily completely understood. That’s going to be tough to stomach, but the sooner we accept it, the better chance we have of allowing society to appreciate how far we’ve come and to apply non-understandable truths to our technologies and creations.
As I’ve argued, if it’s our machines doing the discovering, we can still have naches—we can take an often vicarious pride and joy in the success of our progeny. We made these machines, so their discoveries are at least partly due to humanity. And that’s exciting, as these programs of the future begin to uncover new truths about the universe.
They may just inject a bit more mystery into the world than we might have bargained for.
Samuel Arbesman is a senior scholar at the Kauffman Foundation and a fellow at the Institute for Quantitative Social Science at Harvard University. He is the author of The Half-Life of Facts.