Transport

It Wasn’t Me, Officer! It Was My GPS.

What happens when we blame our navigation devices for our car crashes.

The GPS-assisted crash has become an occasional—and eyebrow-raising—staple of news coverage: Every few months, one hears about a driver faithfully obeying the “turn by turn” instructions of an in-car navigation system, only to find him- or herself in trouble when the actual traffic landscape fails to conform. In one high-profile case, a salesman in a rental car, instructed to make a turn, duly beached his car on a set of commuter train tracks, precipitating an expensive crash. In another episode, a stream of motorists in the United Kingdom—each relying on commands from GPS—were sent into a ford where the water had risen after heavy rains, noticing neither the water nor the signs warning that the road had been closed. (So much for swarm intelligence.) Drivers have been sent the wrong way on the German autobahn. In Westchester County, N.Y., a spate of trucks striking low-clearance bridges has been blamed on bad GPS information. (The devices failed to note that the roads in question were not truck routes.) And just last week, a teen driver who caused a four-car crash told police he had been “told to take a left” by his GPS. (Of course, he may simply have been trying to shift blame and attention away from his very spotty driving record.)

There are no firm numbers on how often this sort of thing happens; the phenomenon exists somewhere in that hazy area between statistical outlier and burgeoning trend. And certainly the cost of whatever crashes can be attributed to GPS is vastly exceeded by the technology’s utility in helping to reduce crashes, not only by reducing driver confusion and anxiety (fumbling for maps, veering for unexpected turnoffs) but by streamlining journeys (the shorter the trip, the less chance for a crash). The technology also, for me at least, helps ensure harmony in the car: My wife and I no longer argue over directions, but rather cede responsibility—and blame—to a neutral third party.

But it is worth considering GPS-induced crashes both for their implications for driving and for the evolving legal questions of who is to bear responsibility when such incidents happen.

The common response, when reading accounts like those above, is to chastise the lemming drivers for following their satnavs to the letter. There is often a tone of hostile incredulity: How could you not see a sign or a river right in front of you? In The Office, for example, the hapless Michael Scott, engaging in a heated metaphysical argument with Dwight Schrute over the meaning of “turn right,” insists “the machine knows where it is going” and, despite Schrute’s vain protests (“It can’t mean that, there is a lake there!”), drives into that very same lake.

And yet even as we urge drivers not to rely too much on GPS, navigation systems are becoming not only more prevalent but ever more sophisticated, outfitted with “photo realistic” displays that depict complex highway interchanges, “real-time” traffic information, and information about geographical features ranging from important landmarks to the nearest Chili’s. Driving recently through New York City’s Holland Tunnel, I was amused to see, on the Navigon GPS software on my iPhone, a darkened, generic “tunnel” image—almost as if to reassure me that it knew where we were.

Such enhancements to GPS systems are, on the face of it, good. One doesn’t want drivers straining mentally to make sense of the gap between the world they are seeing and the navigational instructions being given. More information about actual driving conditions would seem to increase a driver’s situational awareness. But there are two possible problems with these deluxe displays. The first is that drivers, lulled by the richness of the visuals, might begin to focus excessively on this detailed, unscrolling world to the exclusion of other events. An interesting (and cautionary) corollary comes from the world of heads-up displays, which project navigational or other information onto the windshield itself and are theoretically designed to help pilots (or drivers) spend more time looking at what is in front of them, rather than at instrument panels. As Daniel Simons and Christopher Chabris note in their book The Invisible Gorilla, however, experienced pilots flying simulated Boeing 727s completed landings using heads-up guidance without noticing the rather conspicuous presence of a large jet turning onto the runway. As they write, “enhancing your ability to keep your eyes facing forward and to stay on the road takes attention away from another aspect of driving (or flying): your ability to detect unexpected events.” It’s similarly possible to imagine a person absorbed in a smartphone “augmented reality” app—which overlays virtual mapping information on the actual camera view—overlooking an approaching bus.

The second problem is that ratcheting up the realism requires ever more realistic information. But cataloging the actual human environment is an elusive task. Roads are closed for construction, traffic patterns are changed, left turns are prohibited during certain hours, highway exits temporarily share the same number—to name just a few of the complications. I once spent a few hours driving through the suburbs with “geographic information analysts” for NAVTEQ, the digital-mapping company. They were “building a new territory,” as they described it, trying to map a new suburban infill development that had sprung up; some of the roads, while drivable, had not even been named yet. My lasting impression of their work was of its Borgesian quality, a kind of perpetual, shifting struggle between the representation and the actual.

And yet what happens when the world that is depicted is different from, or has not yet caught up to, the external world, and something goes awry? Where does the fault lie? Drivers, one might argue, should never rely entirely on a map—what family vacation hasn’t had its moments of (nonlitigable) high drama, with parents squabbling over a desert shortcut promised by Rand McNally that was washed out in the spring runoff? But there is a difference between glancing at a map for initial guidance (and then relying on signs or the road itself for information) and the new way of navigating, which is to receive authoritative real-time spoken and visual instructions—at a level of granularity measured in meters or feet—as one actually drives.

While it might seem, given the events described in the first paragraph, that the courts should be awash in liability claims over improper GPS guidance (and who hasn’t gotten a bum set of directions?), in fact there seem to be no decisions on record. But a lawsuit recently filed against Google by a pedestrian injured when Google Maps sent her on a walking route without sidewalks may be a portent.

The novelty of the legal terrain is hinted at by the presence of just one law-journal article exploring evolving GPS liability issues as applied to driving: the wonderfully titled “Oops, My GPS Made Me Do It!” written by John E. Woodward and published in the University of Dayton Law Review. One fundamental issue, Woodward notes, is locating the actual source of the problem: Was the miscalculation a fault of the software, the hardware, or a temporary glitch in triangulation owing to a defect in the satellite? And one of the most interesting questions Woodward raises is whether the directions given by a GPS unit represent a product or a service (if the latter, product liability claims would not apply).

To answer these questions, he looks back to a number of cases involving aeronautical charts, which “similar to modern GPS automotive products … help guide pilots from point A to point B.” In one representative case, Saloomey v. Jeppesen & Co., the survivors of a plane crash filed suit against Jeppesen, the maker of the chart, for mistakenly indicating that an airfield depicted on the chart was equipped with a complete instrument-landing system (to enable pilots to land using only instruments). “Tragically,” writes Woodward, “Jeppesen’s area chart was incorrect and the airfield was only equipped with a system that would indicate whether the pilot was on the proper flight path.” As a result, the pilot misestimated the altitude of his approach, crashing into a ridge. The 2nd Circuit Court of Appeals, finding in favor of liability—and ruling that Jeppesen’s charts were a product—wrote: “By publishing and selling the charts, Jeppesen undertook a special responsibility, as seller, to insure that consumers will not be injured by the use of the charts.”

Despite the warnings that accompany GPS units (Garmin’s roster of disclaimers, for example), it does not seem outlandish to think this court’s reasoning might apply to navigation systems. One potential out for the defendant, notes Woodward, is “comparative fault”—i.e., the extent to which a driver shares responsibility for the crash. “The manufacturer will have a better case for applying comparative fault,” he writes, “when it is obvious that the end-user deviated from what a reasonable person in the end-user’s position would have done. For example, if there was an abundance of signage indicating the name of the street and the street’s one-way status, the end-user will more than likely have some comparative fault leveled against him or her.” Another approach, as attorney Peter Neger argues in Law Technology News, is for manufacturers to argue “that disregarding visual cues like railroad tracks in favor of blind reliance on the GPS navigational device constitutes misuse of the product and thereby voids the limited warranty.” (Then again, it’s easy to imagine a successful lawsuit in which a driver has turned onto train tracks because a traffic sign is missing, even though he or she conceivably should still have seen the tracks.)

The legal implications of navigation-system error are interesting for one other reason: their relevance to another, related set of impending legal and regulatory issues raised by the new spate of driver-assist technologies and even the prospect of fully driverless, automated (or “autonomous”) driving. As the Rand Corp. has noted, “as of today, no regulations exist for autonomous vehicle technologies,” even though the marketplace is filled with such technologies, ranging from Volvo’s City Safety (which promises low-speed emergency braking when the system detects a hazard) to the “attention assist” and “automatic emergency braking” systems found in Mercedes-Benz’s E-class cars. It’s easy to imagine a legal claim from a driver who is seriously injured when he rear-ends a car that has come to a sudden stop on a highway and, for some reason, his automatic emergency braking has failed to engage. But as the driver of the striking vehicle in a rear-end crash typically bears liability—precisely because the standard of prudent driving is being able to stop in time if the driver ahead does come to a sudden stop—would a company be able to demonstrate “comparative fault”? Would rulings in favor of the plaintiffs in such cases have a chilling effect on innovation in safety technologies? (As the Rand report notes, auto manufacturers resisted air bags, in part, because they worried about shifting legal responsibility in crashes from drivers to themselves.)

As Rand points out, we define negligent behavior against a standard of what counts as unreasonable behavior—a definition that may evolve as computer-assisted driving becomes more widespread. And car manufacturers may have to walk something of a high wire. “Automakers,” writes Rand, “will want to preserve the social norm that crashes are primarily the moral and legal responsibility of the driver, both to minimize their own liability and to ensure safety.” Yet to sell their vehicles, automakers turn toward increasingly sophisticated safety technology, marketed with the implicit promise that the car will make such crashes vanishingly rare.
