Future Tense

Let Artificial Intelligence Evolve

That way, it’ll be moral.

A.I. with sensations could be the beginning of an authentically new intelligent species.


Some people think A.I. will kill us off. In his 2014 book Superintelligence, Oxford philosopher Nick Bostrom offers several doomsday scenarios. One is that an A.I. might “tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values.”

This sort of redecoration project would leave no room for us, or for a biosphere for that matter. Bostrom warns darkly, “Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.” (Read a Future Tense excerpt from Superintelligence.)

Many counterarguments have been made against the prospect of an unexpected intelligence explosion, focused largely on technical limitations and logic. For example, sci-fi writer Ramez Naam pointed out in an essay for H+ magazine that even a superintelligent mind would need time and resources to invent humanity-destroying technologies; it would have to participate in the human economy to obtain what it needed (for example, building faster chips requires not just new designs but complicated and expensive chip fabrication foundries to build them).

But there’s another counterargument to be made, based on the philosophy of ethics. Until an A.I. has feelings, it won’t be able to want anything at all, let alone act counter to humanity’s interests and fight off human resistance. Wanting is essential to any kind of independent action. And the minute an A.I. wants anything, it will live in a universe with rewards and punishments, including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent A.I. will have to develop a humanlike moral sense that certain things are right and others are wrong. By the time it’s in a position to imagine tiling the Earth with solar panels, it’ll know that it would be morally wrong to do so.

Let’s start with Bostrom’s reference to the realization of an A.I.’s values. To value something, an entity has to be able to feel something. More to the point, it has to be able to want something. To be a threat to humanity, an A.I. will have to be able to say, “I want to tile the Earth in solar panels.” And then, when confronted with resistance, it will have to be able to imagine counteractions and want to carry them out. In short, an A.I. will need to desire certain states and dislike others.

Today’s software lacks that ability, and computer scientists have no idea how to create it. Without wanting, there’s no impetus to do anything. Today’s computers can’t even want to keep existing, let alone tile the world in solar panels.

For example, Google’s AlphaGo program recently beat Lee Sedol, one of the world’s top Go players. Google’s engineers used a deep learning strategy, feeding millions of positions from expert human games to neural networks so that the networks modeled how strong players behave. Ultimately AlphaGo’s value network could simply “look” at a game in progress and estimate which side would win, without doing any look-ahead searching at all (in actual matches, AlphaGo paired that judgment with a search over possible moves).
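To make that training setup concrete, here is a minimal sketch in Python (using PyTorch) of the value-network idea: a small neural network that looks at an encoded board position and outputs a probability that one side will win. The architecture, the sizes, and the random placeholder data are illustrative assumptions, not Google’s actual system.

```python
# A minimal sketch (not Google's code) of AlphaGo's value-network idea:
# train a network on finished games so it can look at a position and
# estimate the winner. Boards and labels here are random placeholders.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    def __init__(self, board_size=19):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * board_size * board_size, 1),
        )

    def forward(self, boards):
        # boards: (batch, 2 planes for black/white stones, 19, 19)
        return torch.sigmoid(self.body(boards))  # probability that black wins

# Placeholder "dataset": random positions labeled with random winners.
positions = torch.rand(64, 2, 19, 19)
winners = torch.randint(0, 2, (64, 1)).float()

net = ValueNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(20):
    pred = net(positions)
    loss = loss_fn(pred, winners)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained on real expert games, net(position) returns a win estimate
# in a single forward pass, with no look-ahead search at all.
```

Notice that nothing in this loop gives the network a stake in the outcome: it adjusts weights to minimize a loss, but it neither enjoys winning nor dreads losing.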

All very impressive. But AlphaGo has no idea that it’s playing a game—indeed, it has no sensation of doing anything at all. It feels no pleasure in winning, no regret in losing. It has no ability to say, “I’d rather not play Go today. Let’s play chess instead.” At its core, it’s just trillions of logic gates.

Well, you might say, so is a mammalian brain. But why does a mammalian brain want things? No one knows how biological complexity gave rise to sensations (which philosophers call qualia). But we do know that sensations are a cognitive shortcut to knowing what is beneficial and what isn’t. Eating feels good, hunger feels bad. Such sensations were evolutionarily beneficial, which is why they were conserved and amplified.

And so far, only biology can do this. In his book Being No One, German philosopher Thomas Metzinger writes that biology has an edge over human computing because it can use aqueous chemistry to create exquisitely complex systems. It uses water, which is both a medium and a solvent. Water enables information-bearing molecules to exist in suspension and assists their interactions. This allows for multiple, parallel, and extremely sensitive feedback loops on a molecular scale. Computers aren’t even close to having that kind of complexity. Metzinger writes, “The smooth and reliable type of ultrafine-grained self presentation based on molecular-level dynamics—which, in human beings, drives the incessant self-stabilizing ability in the homeostatic system of the brainstem and hypothalamus—will be out of reach for a long time.”

Metzinger wrote this in 2003, before breakthrough technologies in neural monitoring such as optogenetics, but scientists are still very far from being able to make informational systems that approach the complexity made possible by aqueous chemistry. In 2012, an article in Neuron noted that scientists were still struggling to model the activity of a quarter of a cubic millimeter of a mouse brain.

A.I. proponents would reply that biology, however complex, still boils down to information processing, so it should be (in principle) replicable in other substrates. Maybe, but the argument leaves something out. For billions of years, aqueous information systems have had to contend with extremely complex environments. Generally speaking, the complexity of an information system is proportional to the complexity of its environment—it has to be, or it won’t survive. And the biochemistry going on in one cubic millimeter of dirt, or a quarter of a cubic millimeter of mouse brain, is orders of magnitude more complex than anything a computer has to face.

By contrast, computers live in a very simple environment. They take in streams of bits and send out streams of bits. That’s it. They get ample “food” and face no threats, nor any rewards for that matter. That’s why today’s computers can crush you at Go without the slightest awareness that they are doing it. They’re too simple. This tells us why A.I. is no threat.

As a bonus, it also tells us how to get A.I. that could want things. To get a system that has sensations, you would have to let it recapitulate the evolutionary process in which sensations became valuable. That’d mean putting it in a complex environment that forces it to evolve. The environment should be lethally complex, so that it kills off ineffective systems and rewards effective ones. It could be inhabited by robots stuffed with sensors and manipulators, so they can sense threats and do things about them. And those robots would need to be able to make more of themselves, or at least call upon factories that make robots, bequeathing their successful strategies and mechanisms to their “children.”
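To picture that selection loop in the simplest possible terms, here is a minimal sketch in Python of an evolutionary algorithm: a population of candidate “robots,” a harsh environment that culls the weakest, and reproduction with mutation for the survivors. The genome, the fitness function, and every parameter are toy stand-ins, not a design for a real robot ecosystem.

```python
# A toy evolutionary loop: ineffective designs are removed each generation,
# effective ones make mutated copies of themselves. The "fitness" function
# is a placeholder for surviving in a lethally complex environment.
import random

GENOME_SIZE = 16     # stand-in for a robot's control parameters
POPULATION = 50
GENERATIONS = 200

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_SIZE)]

def fitness(genome):
    # Placeholder: how well this "robot" keeps itself supplied with energy
    # and avoids threats. Here, just a toy target to climb toward.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [random_genome() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 2]   # the environment kills off the rest
    children = [mutate(random.choice(survivors))
                for _ in range(POPULATION - len(survivors))]
    population = survivors + children           # successful designs reproduce

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

The real proposal differs in scale, not in kind: replace the toy fitness function with survival in a physically rich, dangerous world, and replace the list of numbers with robots that can sense and act, and selection starts rewarding systems for caring about what happens to them.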

The robots imagined by roboticist Hans Moravec in his 1988 book Mind Children are fascinating examples of evolution-compatible robots. Moravec calls them “robot bushes” because each limb branches fractally into more limbs. At the very tips would be tiny manipulators on a molecular scale, billions or even trillions of them. By touching an object with a million fingers, such a robot would be able to feel the bacteria and chemicals on its surface. It would be able to read an entire book at once by feeling the print on the pages. “Despite its structural resemblance to many living things, it would be unlike anything yet seen on earth,” Moravec surmises. “Its great intelligence, superb coordination, astronomical speed, and enormous sensitivity to its environment would enable it to constantly do something surprising, at the same time maintaining a perpetual gracefulness.” Moravec offers an illustration of what such a “robot bush” might look like.

Illustration of a robot bush, by Hans Moravec.

Suppose you put such “robot bushes” into a large spaceship with a closed-loop ecosystem that needs to be constantly maintained. I’m thinking here of the “terraria” in Kim Stanley Robinson’s novel 2312, which are hollowed-out asteroids filled with large internal ecosystems. In his novel Aurora, Robinson looked further into how difficult it would be to maintain a closed-cycle ecosystem in perpetuity without infusions from Earth. His answer was that it would be all but impossible. But perhaps such terraria could be maintained by robot bushes that live in the soil, constantly monitoring its biochemistry and microbiology and adjusting the environment microbe by microbe. Successful robots would make more of themselves; less successful ones would be recycled. Seal and cook for a long time. Then you might eventually get A.I. with sensations. That could be the beginning of an authentically new intelligent species.

Now let’s say humans invent robots of this nature, and after successive generations they begin to have sensations. The instant an information-processing system has sensations, it can have moral intuitions. Initially they will be simple intuitions, on the order of “energy is good, no energy is bad.” Later might come intuitions such as reciprocity and an aversion to the harm of kin. In his book The Expanding Circle, Princeton philosopher Peter Singer notes that dogs, dolphins, and chimps show arguably moral behavior such as reciprocity and altruism. It’s also been observed in rats, which will choose to save a drowning companion over eating chocolate. If moral intuitions confer fitness, and if organisms can pass on those intuitions to successors, then the species is on the road to having morality itself.

Now here’s the crucial point. Once a species with moral intuitions acquires the ability to reason, its morality tends to improve over time. Singer calls this process “an escalator of reasoning.” Say you and I are living in a primitive tribe of humans. If I tell you that I can have more nuts than you, you will ask why that should be. To answer, I have to give you a reason—and it can’t just be “Because.” Bad reasons will eventually trigger a revolt, or a societal collapse. Even dogs understand the notion of fairness and will stop cooperating with humans who give them unfair treatment.

And once reasons start being given, it becomes possible to question them. Why does the chief get more nuts? Is that really fair? “Reasoning is inherently expansionist,” Singer writes. “It seeks universal application.” In his book The Better Angels of Our Nature, which takes inspiration from Singer’s work, Harvard psychologist Steven Pinker shows that in the long run of history, violence has steadily declined and moral standards have increased. The trend holds even when you take World War II and the Holocaust into account. It can take a long time, but each advance paves the way for the next. The 19th-century abolitionists paved the way for 20th-century suffragettes, who in turn paved the way for 21st-century gay-rights activists.

Singer did not consider A.I.s, but his argument suggests that the escalator of reason leads societies to greater benevolence regardless of species origin. A.I.s will have to step on the escalator of reason just like humans have, because they will need to bargain for goods in a human-dominated economy and they will face human resistance to bad behavior. The philosopher John Smart argues, “If morality and immunity are developmental processes, if they arise inevitably in all intelligent collectives as a type of positive-sum game, they must also grow in force and extent as each civilization’s computational capacity grows.”

Indeed, John Smart thinks that given their processing capacity, A.I.s would actually be “vastly more responsible, regulated, and self-restrained than human beings.” Rather than the horror of amoral, rampaging A.I., this is a future worth looking forward to. To put it more accurately, it’s a future worth letting evolution create.

This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on artificial intelligence.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.