Future Tense

Bots on the Beat

How can we instill journalistic ethics in robot reporters?

Journalists earn their audience’s respect through diligence, ethical decision-making, and transparency.

Photo by Digital Vision/Thinkstock

On March 24 at 2:36 p.m., the New York Post reported that the body of former Nets basketball player Quinton Ross had been found in a shallow grave on Long Island. A few minutes later, the paper corrected the story to indicate the victim was a different man by the same name. But it was already too late. Other news sources had picked it up—including a robot journalist.

Created by Google engineer Thomas Steiner, Wikipedia Live Monitor is a news bot designed to detect breaking news events. It does this by monitoring the velocity and concurrency of edits across 287 language versions of Wikipedia. The theory is that if lots of people are editing Wikipedia pages about the same event, in different languages, at the same time, then chances are something big and breaking is going on.

At 3:09 p.m. the bot recognized the apparent death of Quinton Ross (the basketball player) as a breaking news event—there had been eight edits by five editors in three languages. The bot sent a tweet. Twelve minutes later, the page’s information was corrected. But the bot remained silent. No correction. It had shared what it thought was breaking news, and that was that. Like any journalist, these bots can make mistakes.
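To make the idea concrete, here is a rough sketch, in Python, of the kind of heuristic such a monitor might apply. The thresholds, time window, and event fields are my own illustrative assumptions, loosely echoing the "five editors in three languages" pattern above; this is not Steiner's actual code.

```python
# Illustrative sketch only -- not Wikipedia Live Monitor's actual implementation.
# Assumes a stream of edit events, each a dict with a "topic" identifying the
# article, the "language" edition, the "editor," and a "timestamp" in seconds.
from collections import defaultdict

WINDOW_SECONDS = 600   # look at the last ten minutes (assumed value)
MIN_EDITORS = 5        # illustrative thresholds, not the bot's real ones
MIN_LANGUAGES = 3

def is_breaking(events, now):
    """Return True if recent edits to one topic look like breaking news."""
    recent = [e for e in events if now - e["timestamp"] <= WINDOW_SECONDS]
    editors = {e["editor"] for e in recent}
    languages = {e["language"] for e in recent}
    return len(editors) >= MIN_EDITORS and len(languages) >= MIN_LANGUAGES

def detect(edit_stream, now):
    """Group incoming edits by topic and flag the ones that cross the bar."""
    by_topic = defaultdict(list)
    for event in edit_stream:
        by_topic[event["topic"]].append(event)
    return [topic for topic, evts in by_topic.items() if is_breaking(evts, now)]
```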

To mark the growing interest in online news and information bots, the Knight-Mozilla OpenNews project deemed last week “botweek.” Narrative Science and Automated Insights run robot journalists that produce sports or finance stories straight from the data, some of which are indistinguishable from human-written stories. The Quake Bot from the Los Angeles Times made news recently, including here at Slate, by posting the first news report about a 4.7-magnitude earthquake that hit the area. Twitter bots like NailbiterBot or the New York Times 4th Down Bot produce newsy sports tweets to entertain and engage, while the TreasuryIO bot keeps a watchful eye on U.S. federal spending. Researcher Tim Hwang at the Data & Society Research Institute is looking at how to use bots to detect misinformation on social networks and target the people spreading it, or others around them, to try to correct it.

As these news bots and the algorithms and data that run them become a bigger part of our online media ecosystem, it’s worth asking: Can we trust them? Traditionally, journalists have earned their audience’s respect through diligence, ethical decision-making, and transparency. Yet computer algorithms may be entirely opaque in how they work, necessitating new methods for holding them accountable.

Let’s consider here one value that a robot journalist might embody as a way of building trust with its audience: transparency. What would a standard transparency policy look like for such a bot?

At its core, transparency is a user-experience problem. Some have argued, and rightly so, for the ability to file a Freedom of Information Act request for the source code of algorithms used by the government. Some tweet-spewing news bots are open-sourced already, like TreasuryIO. But source code doesn’t really buy us a good user experience for transparency. For one thing, it takes some technical expertise to know what you’re looking at. And as a programmer, I find it challenging to revisit and understand old code that I myself have written, let alone code written by someone else. Furthermore, examining source code introduces another bugaboo: versions. Which version of the source code is actually running the Twitter bot?

No, at the end of the day, we don’t really want source code. We want to know the editorial biases, mistakes, and tuning criteria of these bots as they are used in practice—presented in an accessible way. Usable transparency demands a more abstract and higher-level description of what’s important about the bot: more “restaurant inspection score”–style than unreadable spaghetti code.

Data provenance is another key facet of a transparency policy. What’s the quality of the data feeding the bot? In the example of the Quake Bot, the data is relatively clean, since it’s delivered by a government agency, the United States Geological Survey. But the Wikipedia Live Monitor is operating off of entirely noisy (and potentially manipulable) social signals about online editing activity.
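Here is what a basic provenance and sanity check might look like for a quake-reporting bot, again as a sketch. The feed URL and field names below reflect my reading of the public USGS GeoJSON earthquake feed and should be treated as assumptions; a real bot would do far more validation before publishing anything.

```python
# Illustrative provenance check -- the USGS endpoint and field names here
# are assumptions based on the public GeoJSON feed, not verified specs.
import json
from urllib.request import urlopen

FEED_URL = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def fetch_quakes(url=FEED_URL):
    # Only trust data served from the expected government domain.
    if not url.startswith("https://earthquake.usgs.gov/"):
        raise ValueError("unexpected data source: " + url)
    with urlopen(url) as resp:
        data = json.load(resp)
    quakes = []
    for feature in data.get("features", []):
        props = feature.get("properties", {})
        mag, place = props.get("mag"), props.get("place")
        # Basic sanity checks before anything gets near a published story.
        if mag is None or place is None or not (-1.0 <= mag <= 10.0):
            continue
        quakes.append({"magnitude": mag, "place": place})
    return quakes
```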

Imagine a hacker who knows how an automated news bot consumes data to produce news content. That hacker could infiltrate the bot’s data source, pollute it, and possibly spread misinformation as that data is converted into consumable media. I wouldn’t want my hedge fund trading on bot-generated tweets.

Luckily for us, there are already some excellent examples of accountable bots out there. One bot in particular, the New York Times’ 4th Down Bot, is exemplary in its transparency. The bot uses a model built on data collected from NFL games going back to the year 2000. For every fourth down in a game, it uses that model to decide whether the coach should ideally “go for it,” “punt,” or “go for a field goal.” The bot’s creators, Brian Burke and Kevin Quealy, endowed it with an attitude and some guff, so it’s entertaining when it tweets out what it thinks the coach should do.

Burke and Quealy do a deft job of explaining how the bot defines its world. It pays attention to the yard line on the fourth down as well as how many minutes are left in the game. Those are the inputs to the algorithm. It also defines two criteria that inform its predictions: expected points and win percentage. The model’s limitations are clearly delineated; it can’t handle overtime properly, for instance. And Burke and Quealy explain the bias of the bot, too: With its data-driven bravado, it’s less conservative than the average NFL coach.
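To give a flavor of how such a recommendation might be computed, here is a toy version of the decision step. The probabilities and point values are invented for illustration; this is not the 4th Down Bot’s real model.

```python
# Toy fourth-down recommender -- every number below is made up for
# illustration and is NOT the 4th Down Bot's actual model.
def expected_points(option, yards_to_go, yard_line):
    """Crude stand-in for a model fit on historical play-by-play data."""
    if option == "go for it":
        conversion_odds = max(0.1, 0.65 - 0.05 * yards_to_go)
        return conversion_odds * 4.0 - (1 - conversion_odds) * 1.5
    if option == "field goal":
        make_odds = 0.9 if yard_line <= 30 else 0.5
        return make_odds * 3.0 - (1 - make_odds) * 1.0
    if option == "punt":
        return 0.2  # rough value of flipping field position
    raise ValueError(option)

def recommend(yards_to_go, yard_line):
    options = ["go for it", "field goal", "punt"]
    return max(options, key=lambda o: expected_points(o, yards_to_go, yard_line))

# e.g. recommend(2, 35) returns "go for it" under these toy numbers,
# while recommend(10, 35) returns "field goal".
```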

Two things that the bot could be more transparent about are its uncertainty—how sure it is in its recommendations—and its accuracy. Right now the bot essentially just says, “Here’s my prediction from the data”—there’s no real assessment of how it’s doing overall for the season. Bots need to learn to explain themselves: not just what they know, but how they know it.
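One hypothetical way a bot could surface both: attach a confidence figure to each call, derived from how close the competing options were, and keep a running tally of how past calls turned out. The margin-to-confidence mapping in this sketch is my own assumption, not anything the bot actually does.

```python
# Hypothetical add-on for publishing confidence and season accuracy;
# the margin-to-confidence mapping below is invented for illustration.
class ScoredBot:
    def __init__(self):
        self.calls = 0
        self.correct = 0

    def recommend_with_confidence(self, scores):
        """scores: dict mapping each option to its expected value."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        best, runner_up = ranked[0], ranked[1]
        margin = scores[best] - scores[runner_up]
        confidence = min(0.99, 0.5 + margin / 4.0)  # assumed mapping
        return best, confidence

    def record_outcome(self, was_correct):
        self.calls += 1
        self.correct += int(was_correct)

    def season_accuracy(self):
        return self.correct / self.calls if self.calls else None

# A tweet built from this might read (with hypothetical numbers):
# "Go for it (72% confident). Season so far: 41 of 55 calls correct."
```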

Others working on algorithmic transparency—in newsrooms, elsewhere in media, or even in government—might use this as a first-class case study. Visualize the model. Put it in context. Explain your definitions, data, heuristics, assumptions, and limitations—but also don’t forget to build trust by providing accuracy and uncertainty information.

The cliché phrase “you’re only human” is often invoked to cover for our foibles, mistakes, and misgivings as human beings, to make us feel better when we flub up. And human journalists certainly make plenty of those. Craig Silverman over at Poynter writes an entire column called Regret the Error about journalists’ mistakes.

But bots aren’t perfect, either. Every robot reporter needs an editor, and probably the binary equivalent of a journalism ethics course, too. We’d be smart to remember this as we build the next generation of our automated information platforms.

This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.