Future Tense

Who’s Responsible When a Twitter Bot Sends a Threatening Tweet?

Can a Twitter bot be held responsible for a death threat?


If the future of artificial intelligence is broad enough to include law-breaking robots, that future is a lot closer than you might think. Rejoice, fans of Robot & Frank.

Recently, Jeffrey van der Goot of Amsterdam was questioned by police after a Twitter bot he owned autonomously composed and tweeted a death threat. While conversing with another Twitter bot, van der Goot’s bot tweeted “I seriously want to kill people” at a fashion event in Amsterdam. The bot recombined the text of tweets van der Goot himself had written to compose new tweets of its own. Based on van der Goot’s explanation, the bot was programmed merely to create comprehensible sentences, not sentences with any particular meaning or intent.
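To see why such a bot can produce sentences it doesn’t “mean,” consider how text-recombination bots of this kind are commonly built: a Markov chain maps each short run of words in the source tweets to the words that followed it, then walks those statistics to generate new text. The internals of van der Goot’s bot aren’t public, so the following Python sketch is purely illustrative; the function names and sample corpus are hypothetical.

```python
import random
from collections import defaultdict

def build_chain(tweets, order=1):
    """Map each run of `order` words to the words observed after it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate_tweet(chain, order=1, max_words=20):
    """Start from a random key and repeatedly sample an observed follower."""
    words = list(random.choice(list(chain.keys())))
    while len(words) < max_words:
        followers = chain.get(tuple(words[-order:]))
        if not followers:  # dead end: this run of words never continued
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Hypothetical corpus standing in for the owner's old tweets.
corpus = [
    "I seriously want to see the tulips this spring",
    "the fashion event in Amsterdam looked crowded",
]
chain = build_chain(corpus)
print(generate_tweet(chain))
```

Nothing in a walk like this encodes meaning or intent: the output is a statistical shuffle of the owner’s own words, which is exactly why attributing intent to the bot’s tweet is fraught.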

Van der Goot further reports that the police claimed he was legally responsible because the bot was registered in his name and used his words, a claim he finds questionable. The bot’s creator apologized, while also claiming ignorance about “who is/should be held responsible (if anyone).”

So who is responsible? I can’t claim familiarity with Dutch law (although I’m assuming tulips have special rights), but I’ve argued before that in the United States the First Amendment protection of free speech extends to robots—including bots. It would be unconstitutional for American police to require van der Goot to shut down the bot if its speech qualified as constitutionally protected speech.

“True threats” are not protected under the First Amendment, and the Amsterdam police certainly considered the bot’s tweet just that. But it’s hard to say whether the tweet expressed sufficient “intent to intimidate” to qualify as a true threat without more information about the tweet, the bot, and the circumstances around it. In Virginia v. Black, the U.S. Supreme Court ruled that the government “may choose to prohibit only those forms of intimidation that are most likely to inspire fear of bodily harm.” If the bot’s tweet was not reasonably likely to inspire fear of bodily harm given all of the facts, the Constitution would have prohibited police efforts to delete it, even though the bot is not a person. And if the tweet fell within constitutionally protected speech, no one would bear criminal responsibility because there would be no crime.

In fact, it is likely a stretch to say that the bot’s tweet could be a “true threat” under American law, because a true threat requires intent and the bot was programmed to generate tweets at random. But let’s assume the tweet in question constituted a “true threat” and was not protected speech. Would anyone be liable? I’ve argued before for laws that address computers, programs, and robots that make decisions through artificial intelligence or autonomous technology, without direct human input or supervision. This is a good example of laws failing because they do not anticipate that a robot could issue a true threat. Our laws assume that only human beings can speak, write, and so on. Even though van der Goot owns the program, laws that prohibit threats and harassment prohibit only people from engaging in those activities. They do not prohibit programs or robots from threatening or harassing.

This creates a large loophole: tech-savvy stalkers could create autonomous programs or devices that harass their victims with apparent impunity under the law. Considering that many domestic violence experts worry that existing anti-harassment and stalking laws don’t do enough to protect victims (and, as the recent Gamergate controversy demonstrated, Twitter is already a major vector for threats against women), this is potentially very worrisome.

This scenario is reminiscent of how California, Nevada, and Florida have addressed autonomous vehicles. Each state specifically identifies the person who turns the car on as the operator if there isn’t a person in the traditional driver’s seat. This is partly an effort by those states to update prior laws and regulations that assumed there would only be human drivers.

Similarly, if states or the federal government want to prohibit bots from making true threats or harassing individuals, they should pass laws that acknowledge that existing statutes do not address autonomous writing technology, and then assign liability for bots and programs that engage in speech the Constitution does not protect. Legislators can decide for themselves whether liability should rest with the owner of the bot, the user of the bot (if that is relevant), or its creator. However, they should bear in mind that Robot eventually wanted to take the fall for Frank’s criminal activity. Maybe van der Goot’s bot would opt to do the same.