Robot Reporters Need Some Journalistic Ethics

April 2 2014 7:27 AM

Bots on the Beat

How can we instill journalistic ethics in robot reporters?

Journalists earn their audience’s respect through diligence, ethical decision-making, and transparency.

Photo by Digital Vision/Thinkstock

On March 24 at 2:36 p.m., the New York Post reported that the body of former Nets basketball player Quinton Ross had been found in a shallow grave on Long Island. A few minutes later, the paper corrected the story to indicate the victim was a different man by the same name. But it was already too late. Other news sources had picked it up—including a robot journalist.

Created by Google engineer Thomas Steiner, Wikipedia Live Monitor is a news bot designed to detect breaking news events. It does this by monitoring the velocity of concurrent edits across 287 language versions of Wikipedia. The theory is that if lots of people are editing Wikipedia pages about the same event, in different languages and at the same time, then chances are something big and breaking is going on.
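The core heuristic is simple enough to sketch. The snippet below is an illustrative reconstruction, not Steiner's actual code: the threshold values, function names, and ten-minute window are assumptions chosen to mirror the numbers reported in this story (eight edits, five editors, three languages).

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- assumptions, not Wikipedia Live Monitor's real values.
EDIT_THRESHOLD = 5       # minimum recent edits on the topic
EDITOR_THRESHOLD = 3     # minimum distinct editors
LANGUAGE_THRESHOLD = 2   # minimum distinct language editions
WINDOW = timedelta(minutes=10)

def is_breaking(edits, now):
    """Decide whether a topic looks like breaking news.

    edits: list of (timestamp, editor, language) tuples for one topic.
    """
    recent = [e for e in edits if now - e[0] <= WINDOW]
    editors = {editor for _, editor, _ in recent}
    languages = {lang for _, _, lang in recent}
    return (len(recent) >= EDIT_THRESHOLD
            and len(editors) >= EDITOR_THRESHOLD
            and len(languages) >= LANGUAGE_THRESHOLD)
```

By this logic, the Quinton Ross edits (eight edits, five editors, three languages inside a few minutes) would trip the detector, which is exactly what happened at 3:09 p.m.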

At 3:09 p.m. the bot recognized the apparent death of Quinton Ross (the basketball player) as a breaking news event—there had been eight edits by five editors in three languages. The bot sent a tweet. Twelve minutes later, the page’s information was corrected. But the bot remained silent. No correction. It had shared what it thought was breaking news, and that was that. Like any journalist, a bot can make mistakes.


To mark the growing interest in online news and information bots, the Knight-Mozilla OpenNews project deemed last week “botweek.” Narrative Science and Automated Insights run robot journalists that produce sports or finance stories straight from the data, some of which are indistinguishable from human-written stories. The Quake Bot from the Los Angeles Times made news recently, including here at Slate, by posting the first news report about a 4.7-magnitude earthquake that hit the area. Twitter bots like NailbiterBot or the New York Times 4th Down Bot produce newsy sports tweets to entertain and engage, while the TreasuryIO bot keeps a watchful eye on U.S. federal spending. Researcher Tim Hwang at the Data & Society Research Institute is looking at how to use bots to detect misinformation on social networks and target those people, or others around them, to try to correct it.

As these news bots and the algorithms and data that run them become a bigger part of our online media ecosystem, it’s worth asking: Can we trust them? Traditionally, journalists have earned their audience’s respect through diligence, ethical decision-making, and transparency. Yet computer algorithms may be entirely opaque in how they work, necessitating new methods for holding them accountable.

Let’s consider here one value that a robot journalist might embody as a way of building trust with its audience: transparency. What would a standard transparency policy look like for such a bot?

At its core, transparency is a user-experience problem. Some have argued, and rightly so, for the ability to file a Freedom of Information Act request for the source code of algorithms used by the government. Some tweet-spewing news bots are open-sourced already, like TreasuryIO. But source code doesn’t really buy us a good user experience for transparency. For one thing, it takes some technical expertise to know what you’re looking at. And as a programmer, I find it challenging to revisit and understand old code that I myself have written, let alone someone else’s. Furthermore, examining source code introduces another bugaboo: versions. Which version of the source code is actually running the Twitter bot?

No, at the end of the day, we don’t really want source code. We want to know the editorial biases, mistakes, and tuning criteria of these bots as they are used in practice—presented in an accessible way. Usable transparency demands a more abstract and higher-level description of what’s important about the bot: more “restaurant inspection score”–style than unreadable spaghetti code.
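What might that “restaurant inspection score”–style disclosure look like? One possibility is a short structured record published alongside each automated story. The field names and values below are purely hypothetical, sketched to show the level of abstraction being argued for, not any existing standard.

```python
import json

# Hypothetical transparency disclosure for a news bot.
# Every field name and value here is illustrative, not an established format.
disclosure = {
    "bot": "ExampleNewsBot",
    "data_sources": ["Wikipedia recent-changes feed"],
    "trigger": "8 edits by 5 editors in 3 languages within 10 minutes",
    "thresholds": {"edits": 5, "editors": 3, "languages": 2},
    "corrections_policy": "tweets a correction if the source page is reverted",
    "last_updated": "2014-04-02",
}

# Readable for humans, parseable for auditors.
print(json.dumps(disclosure, indent=2))
```

A record like this tells a reader what the bot watches, what makes it fire, and whether it corrects itself—without requiring anyone to read a line of its source.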


