Future Tense

Algorithms Aren’t Like Spock

They aren’t purely rational. Instead, they’re like Capt. Kirk.


When technologists describe their hotshot new system for trading stocks or driving cars, the algorithm at its heart always seems to emerge from a magical realm of Spock-like rationality and mathematical perfection. Algorithms can save lives or make money, the argument goes, because they are built on the foundations of mathematics: logical rigor, conceptual clarity, and utter consistency. Math is perfect, right? And algorithms are made out of math.

In reality, algorithms have to run on actual servers, using code that sometimes breaks, crunching data that’s frequently unreliable. There is an implementation gap between what we imagine algorithms do in a perfect computational universe and all the compromises, assumptions, and workarounds that need to happen before the code actually works at scale. Computation has done all sorts of incredible things, sometimes appearing both easy and infallible. But it takes hundreds or thousands of servers working in tandem to do something as straightforward as answering a search engine query, and that is where the problems of implementation come in.

We tend to confuse the imaginary algorithm with the real. But algorithms are much more like Kirk than they are like Spock—they have to negotiate between competing powers and make something happen in the real world, however messy the process.

Consider the classic computer science problem of the traveling salesman: How can you calculate an efficient route through many destinations at various distances from one another? It’s a hard problem, one that nobody has figured out how to solve both quickly and exactly at scale, though computer scientists have been generating algorithms and itineraries for more than 50 years. In other words, it’s a great imaginary algorithm with many obvious applications in the real world. What is the most efficient way for me to pick up the 3,000 objects my children have scattered randomly around the house and return them to the correct locations, with my voyage beginning and ending at the couch? I could use an app for that.
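
To see why the problem resists easy answers, consider the simplest plausible strategy, the “nearest neighbor” heuristic: from wherever you stand, always walk to the closest remaining stop. The sketch below is a minimal illustration, with an invented couch and invented toy coordinates; the route it produces is fast to compute but can be noticeably longer than the best possible tour.

```python
# A minimal sketch of the greedy "nearest neighbor" heuristic for the
# traveling salesman problem: always visit the closest unvisited stop
# next. Fast and intuitive, but often worse than the optimal tour.
# All coordinates are invented for illustration.
import math

def nearest_neighbor_tour(depot, stops):
    """Build a tour that starts and ends at depot, greedily choosing
    the closest remaining stop at every step."""
    tour, remaining, here = [depot], list(stops), depot
    while remaining:
        nearest = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nearest)
        tour.append(nearest)
        here = nearest
    tour.append(depot)  # the voyage ends back at the couch
    return tour

couch = (0.0, 0.0)
toys = [(3, 4), (1, 1), (5, 0), (2, 6), (4, 2)]  # hypothetical clutter
print(nearest_neighbor_tour(couch, toys))
```

Greedy shortcuts like this are where the imaginary algorithm first meets reality: they trade mathematical perfection for an answer you can act on today.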

The traveling salesman problem is a big deal if you’re routing delivery trucks, and indeed UPS has invested hundreds of millions of dollars in an algorithm called ORION that bases its decisions in part on state-of-the-art traveling salesman math. And yet as the company discovered, the math on its own is not that helpful for the drivers in brown. Here’s Jack Levis, one of the algorithm’s architects at UPS, quoted in the Wall Street Journal:

“The project was nearly killed in 2007, because it kept spitting out answers that we couldn’t implement,” Mr. Levis recalls. The earliest versions of Orion focused on getting the best mathematical results, with insufficient regard for the interests of the driver or the customer, who value some level of routine. For example, regular business customers who receive packages on a daily basis don’t want UPS to show up at 10 a.m. one day, and 5 p.m. the next. And a customer who is expecting a shipment of frozen food needs delivery as soon as possible, even if efficiency demands that someone else gets priority.

The math problem imagines each destination as an identical point on a graph, while actual UPS drop-offs vary greatly in the amount of time they take to complete (hauling a heavy package with a handcart, say, or avoiding the owner’s terrier). So when the traveling salesman problem is applied to an actual business, the computer science question of optimizing paths through a network must share the stage with the autonomy of drivers and the unexpected interventions of other complex human systems, from traffic jams to pets.
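
Here is a hedged sketch of that collision, with every number invented and no relation to ORION’s real cost model: once each stop carries its own service time and a penalty for straying from the customer’s usual delivery hour, the best-scoring route can stop being the shortest one on the map.

```python
# Why "shortest" stops meaning "best" once real constraints enter.
# Each stop has a service time (the handcart, the terrier) and a usual
# delivery hour; straying from a regular customer's routine costs extra.
# All figures are invented for illustration -- not ORION's actual model.
import math

def route_cost(depot, route, speed=30.0, routine_weight=5.0):
    """Score a route: driving time + service time + routine penalties."""
    cost, clock, here = 0.0, 9.0, depot  # pull out of the depot at 9:00
    for stop in route:
        travel = math.dist(here, stop["xy"]) / speed  # hours on the road
        arrival = clock + travel
        cost += travel + stop["service_hours"]
        # Regular customers value consistency: penalize showing up far
        # from the hour they usually get their package.
        cost += routine_weight * abs(arrival - stop["usual_hour"])
        clock = arrival + stop["service_hours"]
        here = stop["xy"]
    return cost + math.dist(here, depot) / speed  # drive back

depot = (0.0, 0.0)
stops = [
    {"xy": (10.0, 0.0), "service_hours": 0.1, "usual_hour": 10.0},
    {"xy": (0.0, 12.0), "service_hours": 0.6, "usual_hour": 9.5},
]
# The same two stops, visited in either order, cover identical map
# distance -- but the ranking flips once routine penalties count.
print(route_cost(depot, stops), route_cost(depot, stops[::-1]))
```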

When you print it out, ORION is about 1,000 pages long. It’s more like a federal budget document than something Alan Turing would have scrawled on a chalkboard. It has hundreds of authors, or millions if you consider that a major part of the challenge is UPS’s “MyChoice” service, which allows customers to change where and when packages should arrive on the fly. In other words, the UPS “solution” to this tangled problem is not a static thing at all but a living process, a structure of code, people, data, assumptions, labor laws, traffic tickets, and all sorts of other stuff crammed together.

So ORION as implemented is a much more complicated beast than its intellectual roots in the traveling salesman problem. That gap between imagination and reality can be quite dangerous, like the RepricerExpress software that dropped prices on thousands of items on Amazon to a penny, nearly bankrupting a number of small businesses. And those are just the scandals—algorithms are constantly nudging and shifting our choices, sometimes by design but often by accident. And the illusion of computational perfection creates blind spots that can be very dangerous when the algorithm in question is vetting job candidates or awarding loans. In these cases we want the fair, objective, imaginary algorithm to make the world a better place, but the judgment machines encode all sorts of assumptions and biases into the real thing, the tangle of code and data hidden away inside the black box.
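
The penny-pricing disaster is easy to reconstruct in miniature. What follows is a hypothetical sketch, not RepricerExpress’s actual logic: two sellers each undercut the other by a cent, and without a minimum-price guard the feedback loop races straight to the bottom.

```python
# A hypothetical repricing loop (invented, not RepricerExpress's code):
# each seller re-prices to one cent under the competitor. With no floor,
# the spiral only stops when the price hits a penny.
def reprice(my_price, competitor_price, floor=None):
    """Undercut the competitor by one cent, clamped at floor if given."""
    new_price = round(competitor_price - 0.01, 2)
    return max(new_price, floor) if floor is not None else new_price

a, b = 20.00, 19.99
rounds = 0
while a > 0.01 and b > 0.01:     # no floor: the race to the bottom
    a = reprice(a, b)
    b = reprice(b, a)
    rounds += 1
print(rounds, a, b)              # about a thousand rounds to a penny

a, b = 20.00, 19.99
for _ in range(1500):            # with floors, the spiral bottoms out
    a = reprice(a, b, floor=15.00)
    b = reprice(b, a, floor=14.50)
print(a, b)                      # settles at 15.00 and 14.99
```

The guard is one line of code; the damage from its absence was measured in real businesses’ inventories.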

The gap matters because many of the most powerful corporations in existence today are essentially cultural wrappers for complex algorithms. Google started with PageRank and found its forest of money trees with AdWords and AdSense. Amazon’s transformational algorithm involved not just computation but logistics, finding ways to outsource, outmaneuver, and outsell traditional booksellers (and later sellers of almost every kind of consumer product). Facebook developed the world’s most successful social algorithm for putting people in contact with one another. These black boxes are heavily fortified and constantly tweaked, but the sales pitch is all about the magic of computational perfection. It all works great until it doesn’t, like Amazon removing thousands of LGBTQ books from its sales rankings in 2009, or Google Photos tagging black people as gorillas in 2015.
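
The contrast between the tidy kernel and the fortified black box is easy to see with PageRank itself. The simplified version Brin and Page published in 1998 fits on one screen, as in the textbook sketch below (the four-page web is invented); the ranking system Google actually operates wraps that kernel in vastly more machinery, and the machinery is where the failures live.

```python
# A textbook sketch of simplified PageRank (the published 1998 idea,
# not Google's production system): a page's score is the probability
# that a random surfer lands there, found by repeatedly redistributing
# scores along links. The four-page web below is invented.
links = {               # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85          # standard damping factor from the original paper

for _ in range(50):     # power iteration until the scores settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

print({p: round(r, 3) for p, r in rank.items()})  # "c" ranks highest
```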

These are serious problems, but they are symptoms that can usually be treated by stuffing a few more exceptions and workarounds into the black box in question. The root causes are much harder to trace—the subtle errors, the shifts of meaning and context that take place under the cover of computational magic. The search results that never appear, the products that are choked off from sale, the things that algorithms don’t know and that we easily forget. China has launched a new citizenship score initiative that combines elements of a credit score with consumer monitoring to create a politically slanted “trustworthiness” index. And yes, it’s a “complex algorithm” at work behind the scenes. Imagine the challenge of fixing a credit reporting error in the U.S.—already nightmarish—and then also having to argue that playing a lot of video games doesn’t make you a bad person and that you haven’t been engaging in “trust-breaking acts.” From the perspective of companies and, increasingly, governments, the best thing about the imaginary algorithm is there’s nothing there to argue against.

This piece is excerpted from a book in progress under contract with MIT Press.

This article is part of the algorithm installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on algorithms.

Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.