Future Tense

Choose at Your Own Risk

How technology is changing our choices and the values that help us make them.

You’re sitting at your desk, with the order in front of you that will enable the use of autonomous lethal robots in your upcoming battle. If you sign it, you will probably save many of your soldiers’ lives, and likely reduce civilian casualties. On the other hand, things could go wrong, badly wrong, and having used the technology once, your country will have little ground to argue that others, perhaps less responsible, should not unleash the same technology. What do you choose? And what values inform your choice?

We like to think we have choices in all matters, and that we exercise values in making our choices. This is perhaps particularly true with regard to new technologies, from incremental ones that build on existing products, such as new apps, to more fundamental technologies such as lethal autonomous robots for military and security operations. But in reality we understand that our choices are always significantly limited, and that our values shift over time in unpredictable ways. This is especially true with emerging technologies, where values that may lead one society to reject a technology are seldom universal, meaning that the technology is simply developed and deployed elsewhere. In a world where technology is a major source of status and power, that usually means the society rejecting the technology has, in fact, chosen to slide down the league tables. (Europe may be one example.)

Take choice. To say that one has a choice implies, among other things, that one has the power to make a selection among options, and that one understands the implications of that selection. Obviously, reality and existing systems significantly bound whatever options might be available. In 1950, I could not have chosen a mobile phone: They didn’t exist. Today, I can’t choose a hydrogen car: While one could certainly be built, the infrastructure to supply the hydrogen that would let me actually drive it doesn’t exist. Before the Sony Walkman, for example, no one would have said they needed a Walkman, because they had no idea it was an option.

Moreover, in the real world no choice is made with full information. This is true at the individual level; if I could choose my stocks with full information, I would be rich. It is also true at the level of firms and governments: Investments and policies are formulated on the best available information, because otherwise uncertainty becomes paralysis, and that carries its own cost.

Values, the principles we live by, are equally complicated. They arise from many different sources: one’s parents, cultural context, experiences, and contemplation of what constitutes meaning and the good life. Professions such as engineering, law, the military, and medicine all have formal and codified statements of ethics and values. In general, value systems tend to rest on one of two fundamental bases: absolute rules, such as the Ten Commandments, or the goal of enhancing individual and social welfare, as in utilitarianism. In practice, most people mix and match their frameworks, adapting the broad generalizations of their ethical systems to the circumstances and issues of a particular decision.

Both choices and values rely on incomplete, but adequate, information. With inadequate information, choice may exist, but it is meaningless, because you don’t know enough about what you are choosing to make a rational differentiation among options, and because the results of the choice are unknowable anyway. Without information, values become exercises in meaningless ritual. The Sixth Commandment is, after all, thou shalt not kill, but people do so all the time. Sometimes it is in a military operation, a form of killing that, done within the constraints of international law and norms, is generally considered lawful and ethical; sometimes it is the state responding to a criminal act with capital punishment; sometimes it is considered not justified and thus murder. It is the context—the information surrounding the situation in which the killing occurs—that determines whether or not the absolute rule applies. And utilitarianism obviously requires some information about the predictable outcomes of the decision, because otherwise how do you know that on balance you’ve improved the situation?

We are used to a rate of change that, even if it does make choices and values somewhat contingent, at least allows for reasonable stability. Today, however, rapid and accelerating technological change, especially in the foundational technological systems known as the Five Horsemen—nanotechnology, biotechnology, information and communication technology, robotics, and applied cognitive science—has overthrown that stability. The cycle time of technology innovations and the rapidity with which they ripple through society have become far faster than the institutions and mechanisms that we’ve traditionally relied on to inform and enforce our choices and values.

The results so far aren’t pretty. One popular approach, for example, is the precautionary principle. As stated in the U.N. World Charter for Nature (Section II, para. 11(b)), this requires that “proponents [of a new technology] shall demonstrate that expected benefits outweigh potential damage to nature, and where potential adverse effects are not fully understood, the activities should not proceed.” Since no one can predict the future evolution and systemic effects of even a relatively trivial technology, it is impossible in any real-world situation to understand either the adverse or the positive effects of introducing such a technology. Accordingly, the principle amounts to an (unrealistic) demand that technology simply stop. Some activists, of course, avoid the intermediate step and simply demand that technologies stop: This is the approach many European greens have taken to agricultural genetically modified organisms and some Americans have taken toward stem cell research. The ETC Group, an environmental organization, has called for “a moratorium on the environmental release or commercial use of nanomaterials, a ban on self-assembling nanomaterials and on patents on nanoscale technologies.”

While on the surface these may appear to be supremely ethical positions, they are arguably simply evasions of the responsibility for ethical choice and values. It is romantic and simple to generate dystopian futures and then demand the world stop lest they occur. It is far more difficult to make the institutional and personal adjustments that choice and values require in the increasingly complex, rapidly changing world within which we find ourselves. It is romantic and simple to demand bans and similar extreme positions given that, in a world with many countries and cultural groups striving for dominance, there is little if any real chance of banning emerging technologies, especially if they provide a significant economic, technological, military, or security benefit.

What is needed, then, is some way to more effectively assert meaningful choice and responsible values regarding emerging technologies. There are at least two approaches already used by institutions that face both “known unknowns” and “unknown unknowns”: scenarios, and real-time dialog with rapidly changing systems. Scenarios are not predictions; like war games, they provide valuable experience in adaptation, as well as opportunities to think through how different futures might unfold. Science fiction offers an interesting additional form of scenario: Good science fiction often takes complex emerging technologies and explores their implications in ways that may be fictional but can lead to a much deeper understanding of real-world systems.

Real-time dialog reflects what institutions that must deal with unpredictable, complex adaptive systems already know: There is no substitute for monitoring and responding to the changes that complex systems throw up as they occur. Responsible and effective management requires a process of constant interaction. Such a process is, of course, only half the battle; the other half is to design institutions that are both robust and agile enough to use real-time information to make necessary changes rapidly. What is ethically responsible is not fixation on rules or outcomes but attention to process and institutions: making sure that there is a transparent and workable mechanism for observing and understanding the technology system as it evolves, and that the relevant institutions are able to respond rapidly and effectively to what is learned.

It is premature to say that we understand how to implement meaningful choice and responsible values when it comes to emerging technologies. Indeed, much of what we do today is naive and superficial, steeped in reflexive ideologies and overly rigid worldviews. But the good news is that we do know how to do better, and we know some of the steps we should take. Whether we take them is, of course, a choice based on the values we hold.

On Friday, March 6, Arizona State University will host “Emerge: The Future of Choices and Values,” a festival of the future, in Tempe. For more information, visit emerge.asu.edu. Future Tense is a partnership of Slate, New America, and ASU.