This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate.
Over the past 20 years, cultural psychology research has confirmed what pop-culture purveyors and marketers have long suspected: that what defines American culture is an emphasis on independence, autonomy, and choice. We belt out songs like Frank Sinatra's "My Way" and patronize restaurants with slogans like "Have it your way, right away." While we embrace our unalienable rights, we abhor the value that other cultures place on collective conformity and obedience to authority. After the opening ceremony of the 2008 Beijing Olympic Games, for example, Americans collected stories about how the Chinese participants in the ceremony were expected to sacrifice for the collective, wearing adult diapers and enduring heatstroke so the nationalistic show could go on.
This American emphasis on individual sovereignty poses a problem for new technologies designed precisely to deny personal agency. Autonomous technological agents—from military drones to the self-driving car—are increasingly prevalent, and their potential benefits and conveniences are immense. Yet as today's cutting edge becomes commonplace, these technologies could bump up against Americans' prized autonomy.
The United States drives more than any other society, and the self-driving car offers the glorious possibility of a hands-free cross-country road trip. But how will it harmonize with American drivers' varied preferences for tailgating, conscientious speed-limit monitoring, passive aggression toward crosswalk pedestrians, or highway-traversing pursuits of the fastest lane? General Motors, Volkswagen, Audi, BMW, Volvo, and Google are each currently testing driverless cars, with the intention of making the vehicles available for mass consumption by 2018. Recently, Nevada became the first state to pass legislation directing its Department of Motor Vehicles to formulate guidelines for driverless cars.
While engineers are perfecting the technology, they still must grapple with the drivers, who must both trust and enjoy the automated-car experience. Making a driver-free car safe and effective requires overlooking the uniqueness of each individual driver's personality. Research suggests that autonomous technological agents like service robots and anthropomorphic computer interfaces can diminish users' sense of control. And we hate to give up control.
This conflict is distinct from two more common technology-related scares: that technology is diminishing our basic intelligence and social skills, and that humans will become enslaved by robot overlords. The first is rooted in a romanticized nostalgia for simplicity. This fear, which Nicholas Carr expressed in both a 2008 Atlantic article and the recent book The Shallows, forgets that previous technological advances such as the telephone and the printing press did not somehow set Homo sapiens back several evolutionary steps. The second is rooted in science fiction and represents more of an egotistical, anthropocentric concern about humans' place atop the hierarchy of agents. Neither concern is as legitimate a problem as the possibility that autonomous technology might simply piss people off.
We may think we want convenience, but that feeling may turn to frustration when the convenience comes at the expense of individual expression and choice. Drivers reserve the right to choose the route they take to their destination, the lane in which they drive, the speed at which they travel, and their level of safety awareness—all decisions that an autonomous car might make instead, as the "driver" becomes more of a passenger. Getting stick-shift diehards to surrender their fighter-pilot-level mastery of the controls and adopt the autonomous car will probably be even more difficult. Failing to recognize this conflict between convenience and driver autonomy could mean monumental amounts of money, time, and resources wasted if consumers hesitate to adopt the technology.