Future Tense

What Algorithms Want

It’s about much more than serving up slightly creepy ads.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, Dec. 10, Future Tense will host a three-hour conversation on “The Tyranny of Algorithms” in Washington, D.C. For more information and to RSVP, visit the New America website.

We spend an awful lot of time now thinking about what algorithms know about us: the ads we see online, the deep archive of our search history, the automated photo-tagging of our families. We don’t spend as much time asking what algorithms want. In some ways, it’s a ridiculous question, at least for now: Humans create computational systems to complete certain tasks or solve particular problems, so any kind of intention or agency would have to be built in, right?

This would be an acceptable answer if algorithms didn’t happen to surprise us so often. But surprise us they do, from the mundane yet hilarious autocorrect and transcription fails to the more troubling instances of complex behaviors, like the cascading bad choices high-frequency trading algorithms made that contributed to the 2010 “Flash Crash.”* There’s an interesting philosophical question lurking in there—where does the surprise come from, exactly? Do complex systems sometimes behave in ways that are objectively, statistically surprising? Or is the term surprise a human invention, another storytelling crutch for mammals whose brains were never well-suited for the rational evaluation of complex cultural systems?

Whatever the answer, algorithms’ ramifications for culture stay the same: We perceive these systems as having agency in the world, thinking and doing things according to some kind of logic or plan (whether designed or self-organizing). We expect these computational gods to have reasons for the things they do, even when those reasons are silly or misconstrued versions of human intentions. And even if you think the agency is all baked in by human engineers and designers, it’s worth noting that they are often just as surprised by their creations as anyone else. My favorite example of this is the honesty and even poetry with which Netflix VP and recommendations guru Todd Yellin, quoted in the Atlantic, admitted that he had no idea why his algorithm had such a fascination with Perry Mason:

“Let me get philosophical for a minute. In a human world, life is made interesting by serendipity,” Yellin told me. “The more complexity you add to a machine world, you’re adding serendipity that you couldn’t imagine. Perry Mason is going to happen. These ghosts in the machine are always going to be a byproduct of the complexity. And sometimes we call it a bug and sometimes we call it a feature.”*

The question of what algorithms want may be a human phantasm projected onto barren silicon, but I think that only makes it more interesting. Because while we rarely admit to asking what algorithms want, we are constantly answering the question as we make assumptions about algorithmic desires. We think Siri and other robot conversationalists want to hear us speak in particular ways—shouting, or clipping our syllables, or using the right kind of accent. We organize our thoughts (and, increasingly, our websites and archives) according to noun-heavy keyword-ese that we think Google might find easier to understand. Some luckless souls spend hours trying to craft that perfect Facebook message most likely to be recirculated by its controversial algorithms.

Trying to spot algorithmic intentions has become something of a sport, when you think about it. How many times have you wondered why a particular ad was served up to you, or why some recommendation system keeps trying to make you like a particular artist? And of course there are whole career paths dedicated to the Kremlinology of decoding code: Wall Street quants, cybersecurity specialists, and search engine optimization types all spend tremendous energy on trying to predict and outfox computational systems.

So what do algorithms want? Mostly they want things that further entangle computation and culture. Algorithms want data: well-formatted, digestible, frequently updated data. You can think about the drive to digitize the world as a kind of Borges-like map that is gradually occupying the full territory it means to represent. We started with scanning books and digitizing phone calls, moving on to digitizing maps and images of every storefront and country lane, and now we’re turning to the hard stuff. Systems like Siri are as much about mapping human language as they are about reviewing your agenda. The quantified self makes our heart rates, sleep rhythms, and physical movements computationally tractable.

Once you get all this data, you need more servers, more fiber optic cables, and more sophisticated architecture to plug them all together. Most of this new data comes with more computation, too: the proliferating chips and sensors in our pockets, on our wrists and armbands, in our cars, our light bulbs … everywhere.

Algorithms want all of this in the same sense that we want it: not as a series of conscious choices but as the geologically vast movement of large assemblages. The big systems of human culture are moving toward quantified, perpetually updated processes, accounting for ever-smaller increments of life. The thickening layer of computation is part of this shift. It has its own logic, its own intentionality that can be construed to offer emancipation or Orwellian control but that always tugs us toward more computation.

This leads me to another philosophical question: If algorithms do want things, is there such a thing as algorithmic imagination? To want something, you have to have some capacity to envision a state of reality that does not currently exist. You need goals, or ambitions, or a sense of what might be better than right now. These are some of the questions researchers at the forefront of machine intelligence are grappling with, but they are also a way to peel the map back from the territory and see the layer of computation at work.

Without resolving our dilemma about where surprise really comes from, we can see flashes of something like an algorithmic imagination at those unexpected moments. The arresting images of Google’s Deep Dream algorithm, for instance, suggest a certain aesthetic intentionality that feels very alien to us humans. As the company’s engineers explained, they trained their advanced machine learning system to identify particular features of images, from edges to human faces. And then, in part to test how well this worked, they instructed the system: “ ‘Whatever you see there, I want more of it!’ This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird.” Deep Dream’s restless search for eyes, dogs, architectural structures, and other features creates phantasmagorical images that prompt my imagination to suggest that its imagination sees the world in a very different way. The list of things that a human might identify as important in an image is very different from, and much shorter than, the list that Deep Dream defines and then pursues through thousands of minute enhancements.
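
For readers who want to see the shape of that feedback loop, here is a minimal sketch in Python, assuming PyTorch and torchvision are installed. It uses an off-the-shelf VGG16 network rather than Google’s own model, and the layer choice, step size, and iteration count are illustrative guesses, not Deep Dream’s actual settings; the point is only the loop itself: pick a layer, measure what it already sees, and nudge the image to show more of it.

```python
# A toy version of the "I want more of it" loop, assuming PyTorch/torchvision.
# VGG16, layer 20, and the step settings are stand-ins, not Google's setup.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only change the image, never the network

def dream(image, layer_index=20, step_size=0.01, steps=20):
    """Nudge the image so whatever the chosen layer already 'sees' gets stronger."""
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        activation = image
        for i, layer in enumerate(model):
            activation = layer(activation)
            if i == layer_index:
                break
        # "Whatever you see there, I want more of it": climb the gradient of
        # the layer's response with respect to the image itself.
        activation.norm().backward()
        with torch.no_grad():
            image += step_size * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Start from noise, or swap in a photo tensor of shape [1, 3, H, W].
result = dream(torch.rand(1, 3, 224, 224))
```

Run on a photograph instead of noise, the same loop exaggerates whatever textures the chosen layer happens to respond to, which is roughly where the eyes, dogs, and pagoda-like structures in Deep Dream’s images come from.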

I occasionally get the same spark, the same sense of algorithmic agency, when my phone autocorrects a phrase to make it deeply striking or embarrassing. I never remember these examples long enough to write them down, but others have been more diligent. And I suspect we’ve all encountered those moments of counter-serendipity when algorithms juxtapose an ad with an article that radically recontextualizes both in some interesting way.

Perhaps these moments of algorithmic imagination are all in our heads—another example of humanity holding up technology as a flawed mirror. But whether or not computational systems want anything, the layer of computation is here to stay, and we are living with it, and in it, more intimately every day.

Correction, Dec. 14, 2015: This piece misstated that high-frequency trading algorithms caused the 2010 “Flash Crash.” Research suggests that while the algorithms contributed to the crash, they did not cause it. Also, due to a production error, this paragraph was misattributed to the article’s author, Ed Finn. It was a quote from an Atlantic article.