The Trolls Among Us

If you want to comment on this article, you shouldn’t be allowed to be anonymous.

LONDON—If you are reading this article on the Internet, stop afterward and think about it. Then scroll to the bottom and read the comments. Then recheck your views.

Chances are your thinking will have changed, especially if you have read a series of insulting, negative, or mocking remarks—as so often you will. Once upon a time, it seemed as if the Internet would be a place of civilized and open debate; now, unedited forums often deteriorate into insult exchanges. Like it or not, this matters: Multiple experiments have shown that perceptions of an article, its writer, or its subject can be profoundly shaped by anonymous online commentary, especially if it is harsh. One group of researchers found that rude comments “not only polarized readers, but they often changed a participant’s interpretation of the news story itself.” A digital analyst at Atlantic Media also discovered that people who read negative comments were more likely to judge that an article was of low quality and, regardless of the content, to doubt the truth of what it stated.

Some news organizations have responded by heavily curating comments. One Twitter campaigner, @AvoidComments, periodically reminds readers to ignore anonymous posters: “You wouldn’t listen to someone named Bonerman26 in real life. Don’t read the comments.” But none of that can prevent waves of insulting commentary from periodically washing over other parts of the Internet, infiltrating Facebook or overwhelming Twitter.

If all of this commentary were spontaneous, then this would simply be an interesting psychological phenomenon. But it is not. A friend who worked for a public relations company in Europe tells of companies that hire people to post, anonymously, positive words on behalf of their clients and negative words about rivals. Political parties of various kinds, in various countries, are rumored to do the same.

States have grown interested in joining the fray as well. Last year Russian journalists infiltrated an organization in St. Petersburg that pays people to post at least 100 comments a day; an investigation this past summer found that a well-connected businessman was paying Russian trolls to manage 10 Twitter accounts with up to 2,000 followers. In the wake of the Russian invasion of Ukraine, the Guardian admitted it was having trouble moderating what it called an “orchestrated campaign.” “Bye-bye Eddie,” tweeted the Estonian president a few months ago, as he blocked yet another Twitter troll.

The Russian trolls have been well-documented. But others may be preparing to join them. An Iranian educational group, Tavaana, has lately found its Facebook page blocked thanks to what it suspects was the activity of Iranian trolls. Famously, the Chinese government monitors the Internet inside China, using hundreds of thousands of paid bloggers. It can’t be long before they work out how to do the same in English, or Korean, or other languages as well.

For democracies, this is a serious challenge. Online commentary subtly shapes what voters think and feel, even if it just raises the level of irritation, or gives readers the impression that certain views are “controversial,” or makes them wonder what the “mainstream” version of events is concealing. For the most part, the Russian trolls aren’t supplying classic propaganda, designed to trumpet the glories of Soviet agriculture. Instead, as journalists Peter Pomerantsev and Michael Weiss have written in a paper analyzing the new tactics of disinformation, their purpose is rather “to sow confusion via conspiracy theories and proliferate falsehoods.” In a world where traditional journalism is weak and information is plentiful, that isn’t very difficult to do.

But no Western government wants to “censor” the Internet, either, and objections will always be raised if government money is spent even studying this phenomenon. Perhaps, as Pomerantsev and Weiss have also argued, we therefore need civic organizations or charities that can identify deliberately false messages and bring them to public attention. Perhaps schools, as they once taught students about newspapers, now need to teach a new sort of etiquette: how to recognize an Internet troll, how to distinguish truth from state-sponsored fiction.

Sooner or later, we may also be forced to end Internet anonymity, or at least to ensure that every online persona is linked back to a real person: Anyone who writes online should be as responsible for his words as if he were speaking them aloud. I know there are arguments in favor of anonymity, but too many people now abuse the privilege. Human rights, including the right to freedom of expression, should belong to real human beings, and not to anonymous trolls.