The First Amendment protects the reporter who examines the campaign donations to each U.S. representative and then calculates the open-market value of their votes in Congress. It also covers the artist who paints a mural tying specific state legislators to lax environmental regulations that caused the deaths of children.
But what if C-3PO wrote that story? He's “fluent in over 6 million forms of communication”—can the federal government prohibit his speech in all of them? What if C-3PO painted the mural—can the state government stop his tortured artistic soul from breaking through his bright and shiny exterior?
There are already active concerns about how the First Amendment protects reporters who use drones. In multiple recent incidents, police have arrested or intimidated reporters who used drones to survey accident scenes. The reporters argue that because they are using the drones to observe police activity from a public space as part of a news story, the drone activities are protected under the First Amendment. Police, meanwhile, have claimed that they stopped drones or arrested reporters because of legitimate safety concerns—for instance, in one case, they said a journalist’s drone interfered with a helicopter approaching an accident scene.
C-3PO's First Amendment claim is different. First, he would never run into an accident scene willingly—if he's there, it's only because R2-D2 pushed him there, in which case, the police should arrest that droid. Second, C-3PO isn't a tool someone uses to engage in self-expression—he creates and expresses ideas all his own. Does the First Amendment apply to robots?
A literal reading of the text of the First Amendment suggests that it does: It simply states that the government "shall make no law ... abridging the freedom of speech, or of the press." Nothing there specifically suggests freedom of speech is limited to people. (In contrast, U.S. copyright and patent laws clearly indicate that only human beings qualify as authors and inventors.)
Although we're far, far away from artificial intelligence like C-3PO, AI that might need First Amendment protection is already here and getting more sophisticated all the time. There are robots that autonomously create original paintings and programs that compose original music. At the recent Silicon Valley Contemporary Art Fair, graffiti artist KATSU showcased several works he created with a drone. When a reporter for Vice’s Motherboard asked him if he had “pretty strict control over how the piece looks, or does the drone do its own thing,” he responded, “it’s like 50 percent me having control and 50 percent the drone. …You begin to understand that this really can only be accomplished through this bizarre dance between me controlling the drone and the drone doing its own thing.”
Similarly, Wired recently ran a story about a start-up that is attempting to create imaginative AI. If that technology is successful and combined with drones like KATSU’s, or with writing programs like the ones created by Automated Insights and Narrative Science, there is the serious potential for art, music, and commentary that is offensive, thought-provoking, and possibly even dangerous in the same way that human-created art, music, and commentary have been forever. In that case, local, state, and federal governments may want to limit robot speech.
Although it’s impossible to know exactly what an ordinance or statute prohibiting or limiting robot speech would look like, I think it’s easy to predict that there will be some attempt once robot journalists don’t need a human helping hand. Consider just a few of the attempts that have been made to limit the free speech of people:
- In 1913, Florida enacted a law that required newspapers to give equal space to the opponents of the candidates the papers endorsed. It was good law until the U.S. Supreme Court found it unconstitutional in 1974.
- In 1927, Minnesota enacted a law that permitted courts to shut down a newspaper viewed as “malicious, scandalous and defamatory.” The Supreme Court found it unconstitutional in 1931.
- In 1932, the Los Angeles City Council passed an ordinance that criminalized the distribution of anonymous pamphlets. It was an actively enforced law until the Supreme Court found it unconstitutional in 1960.
If elected representatives were willing to try to limit the First Amendment rights of voters, they will certainly try to limit the First Amendment rights of voters’ computers—especially if AIs can use their unique abilities to create particularly damning stories that, say, connect deadly traffic accidents to particular legislators’ votes. Perhaps they would cover up their motives by claiming that AIs weren’t reviewing the right data or interpreting it properly. They might even look at previous attempts to limit free speech and borrow a common strategy: cite a larger problem. Maybe it’s jobs—“The publishing industry has lost too many jobs. We need to limit the involvement of machines in our news and reporting.” Or maybe it’s national security—“Because of cybersecurity concerns, it is irresponsible to let computers analyze and report on our national problems when cyberterrorists can so easily access their work.”
But the fact is that robots, computer programs, and AI deserve the same free speech protections as humans. The Constitution doesn't differentiate between human and machine speech; it only provides for free speech.
Admittedly, if you go looking for further legal justification for protecting robot speech, you are unlikely to find it. To the best of my knowledge, there is no statute, ordinance, or case law that affirmatively protects AI expression. But that is only because AI and autonomous technology are too new for those laws to exist yet. That is why so many states are scrambling to enact laws that govern self-driving cars: legislators know their states’ existing laws don’t govern them.
It is important to recognize that the First Amendment does not limit its protection to human beings. Without that protection, it will only be a matter of time before towns, states, and Congress pass laws that restrict robot speech, limiting how machines can contribute new and worthwhile ideas. Until creative AI is given a greater and constitutionally protected chance to express itself, we can’t know how useful ideas from droids like C-3PO may be. These could be the droids we're looking for.