The Tyranny of Algorithms: A Future Tense Event Recap

Laura Moy, Ian Bogost, and David Auerbach discuss “The Tyranny of Algorithms” at New America. (Photo by Simone McPhail/New America)

They are “terrifying black boxes.” They are “the poetry of computation.” There is a “mystical element” to the way we speak about them. They “can … illuminate some of our human biases.” They are … algorithms?

On Thursday, Dec. 10, Future Tense hosted “The Tyranny of Algorithms” at New America in Washington, D.C. During the three-hour event, computer scientists, journalists, policy experts, and others discussed how algorithms are influencing our lives. Perhaps more importantly, they discussed how our conversations about algorithms are undermined by widespread misunderstanding of what algorithms actually are.

So what, exactly, are algorithms? “Algorithms are … the intersection between the idealism of mathematics, the idealism of policy, the idealism of big ideas, and the pragmatism of building a system that actually functions in the real world,” Ed Finn, the academic director of Future Tense, said during the introduction to the event. But algorithms are also, at heart, recipes, step-by-step instructions. Or maybe they are marionettes: Throughout the event, speakers came back to the idea of “pulling strings.”
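
To make the “recipe” idea concrete, here is a minimal sketch of an algorithm as step-by-step instructions, written in Python. (The toy recommender and its names are our illustration, not an example from the panel.)

```python
def recommend(watched, catalog):
    """A toy recommendation 'recipe' in three steps."""
    # Step 1: collect the genres of everything you've watched.
    seen_genres = {g for title in watched for g in catalog[title]}
    # Step 2: score each unseen title by how many genres it shares.
    scores = {title: len(seen_genres & set(genres))
              for title, genres in catalog.items() if title not in watched}
    # Step 3: return the highest-scoring title.
    return max(scores, key=scores.get)

catalog = {
    "Alien": ["sci-fi", "horror"],
    "Arrival": ["sci-fi", "drama"],
    "Heat": ["crime", "drama"],
}
print(recommend(["Alien"], catalog))  # -> Arrival
```

The strings being pulled, in other words, are just rules someone wrote down; the consequences come from which rules get written.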

That image was especially relevant in discussions about algorithmic transparency. Algorithms go far beyond the Netflix recommendation with which so many of us are familiar. (Have you ever been both offended and impressed by a Netflix suggestion? It happens to me regularly.) They can write news articles, determine qualification for government services or benefits, identify people who are purportedly more likely to commit crimes, and much more. When algorithmic decision-making can have such life-altering consequences, it’s critical to know what assumptions are baked into the code. Nick Diakopoulos, an assistant professor at the University of Maryland’s College of Journalism, said that it’s important for corporations and governments to “develop mechanisms and standards for what you can disclose about the data, about the algorithms.” He’s hopeful that this will happen, with a little consumer pressure: “What we’re starting to see [are] the first inklings that there will be a demand for … disclosure.” Case in point: the uproar that followed Facebook’s infamous “emotional contagion” study.

Beyond accountability, transparency can provide an opportunity to verify results. “Who checks to see if Google search results are accurate?” said David Auerbach, Slate’s Bitwise columnist and a Future Tense fellow at New America. Holding algorithms accountable can help engender trust.

But there’s a flip side to algorithmic accountability. Laura Moy of the Open Technology Institute noted that an algorithm could scan job applications and detect whether a hiring manager had a pattern of passing on résumés with indicators that an applicant was a minority. That pattern could reflect an unconscious bias—and perhaps, once it was brought to his attention, the hiring manager would take steps to change his practices. Auerbach agreed, suggesting that “if Facebook shows you ‘Here are three keywords we associate with your interests’ and shows you three horrible things,” then maybe you can take a hard look at yourself.
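
As a rough sketch of the kind of audit Moy described (the decision log, group labels, and threshold here are invented for illustration), such an algorithm could compare a manager’s pass rates across applicant groups and flag outsized gaps:

```python
from collections import defaultdict

def audit_pass_rates(decisions, threshold=0.2):
    """decisions: (group, advanced) pairs, where advanced is True if the
    résumé was passed along. Flags groups whose pass rate trails the
    overall rate by more than `threshold`."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, advanced in decisions:
        totals[group] += 1
        passes[group] += advanced
    overall = sum(passes.values()) / sum(totals.values())
    return {g: round(passes[g] / totals[g], 2)
            for g in totals if overall - passes[g] / totals[g] > threshold}

# Hypothetical decision log: group A advances 40 of 50, group B 10 of 25.
log = ([("A", True)] * 40 + [("A", False)] * 10 +
       [("B", True)] * 10 + [("B", False)] * 15)
print(audit_pass_rates(log))  # {'B': 0.4} -- trails the 0.67 overall rate
```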

But there is a danger in taking algorithmic findings without skepticism, or in trying to connect dots that might not actually be related. Jennifer Golbeck, an associate professor at the University of Maryland’s College of Information Studies, said that people tend to cling to single statistical insights—for instance, that algorithms have discovered that people who like curly fries on Facebook tend to be smarter—without thinking about the broader story. Moy hit a similar note, saying that people can mistakenly believe that a statistically significant correlation is definitive.
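
Moy’s caution is easy to demonstrate: with enough data, even a negligible correlation clears the conventional p < 0.05 bar. A minimal sketch with synthetic data, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100_000
likes_curly_fries = rng.normal(size=n)
# "Smartness" that is overwhelmingly noise, with only a 1% link to the
# first variable.
smartness = 0.01 * likes_curly_fries + rng.normal(size=n)

r, p = pearsonr(likes_curly_fries, smartness)
print(f"r = {r:.4f}, p = {p:.3g}")
# Typical result: r is about 0.01 and p < 0.05 -- "statistically
# significant," yet the correlation explains roughly 0.01 percent of
# the variance. Significant is not the same as meaningful.
```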

Despite all of the caveats presented at the event, our Age of Algorithms comes with some bright sides. Golbeck pointed out that, online, individuals can often be reduced to a single bad moment on social media. A terribly worded tweet can sink a person’s career and open them up to infamy. But “algorithms see beyond that,” taking into account a whole person, or at least a whole person as they are represented online. (Algorithms: less judgmental than the media!) And if you don’t like how you think algorithms are summing you up, there are some steps you can take. Jacqueline Wernimont, an assistant professor of English at Arizona State University, said that people are putting forth a “ton of effort … to subvert” tools and systems that try to paint a data portrait of people. “How do you get the algorithm to treat you the way you want to be treated?” she asked. For instance, you can attempt to create false data by using a browser extension that clicks on every ad on a page, confusing data collectors about your interests. Maybe that isn’t exactly how you want to be treated (you can’t exactly do unto algorithms as you would have them do unto you), but it would give you an extra degree of privacy.
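
To see why clicking every ad muddies a data portrait, consider a toy simulation (the interest categories and click counts here are invented): random clicks swamp the genuine signal in a tracker’s profile.

```python
import random
from collections import Counter

random.seed(1)
categories = ["autos", "travel", "fitness", "finance", "cooking"]

genuine = ["cooking"] * 20                               # your real clicks
noise = [random.choice(categories) for _ in range(500)]  # the extension's fake clicks

profile = Counter(genuine + noise)
share = profile["cooking"] / sum(profile.values())
print(f"Cooking's share of the profile: {share:.0%}")  # near chance (20%)
```

With enough uniform noise, your actual interest looks barely different from any other category, and the inferred portrait loses its resolution.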

Algorithms, then, are instructive pieces of information about our world, ourselves, and our behaviors—and not what we often think they are. The panelists noted that a lack of computational understanding may contribute to people’s misgivings about algorithms. Golbeck suggested that people could “learn about algorithms without having to learn about computer science. … You might start with some kind of basic tutorials on the Turing machine. … [I]t starts to help you see, ‘OK, these algorithms [are] not this mythical thing.’”
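
In that spirit, here is the sort of demystifying toy Golbeck might have in mind (our example, not hers): a complete Turing machine whose entire “mind” is a two-entry lookup table, and which simply flips every bit on its tape.

```python
def run_turing_machine(tape, rules, state="scan"):
    """Toy Turing machine: `rules` maps (state, symbol read) to
    (symbol to write, head move, next state)."""
    tape, head = list(tape), 0
    while state != "halt" and 0 <= head < len(tape):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

# Two rules that invert a binary string, moving right until the tape ends.
invert = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
}
print(run_turing_machine("10110", invert))  # -> 01001
```

Nothing mystical: a table of rules, a read/write head, and a tape.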

Science fiction writer and UMD assistant professor Lee Konstantinou may have summed the event up best. His short story about algorithms concluded, “The tyranny of algorithms is nothing more than the tyranny of the past over the present.” It’s about how your past—reduced to bits of data, often out of context—dictates what happens to you today, and tomorrow. And that’s why it’s important for us to continue to interrogate algorithms—all that they are—and discuss their effects.

Visit the New America website to watch the event in its entirety.
