Future Tense

How Politicians Should and Shouldn’t Use Twitter Bots

They aren’t necessarily a bad thing—but we need rules to govern their use.

Donald Trump.

Illustration by Sofya Levina. Photo by Drew Hallowell/Getty Images.

Donald Trump knows the value of bombastic rhetoric, rash promises, and the brevity of 140 characters. The attention tycoon uses Twitter in a way that defies explanation. He mistakenly quotes Mussolini and leftist union organizers. He shoots from the hip when it comes to correct spelling.

But he is not alone in his romance with social media. All of the U.S. presidential candidates are using new technological strategies to deliver one-liners and attention-grabbing pledges to the collective consciousness. Yet the aspiring politicos who trip over themselves in attempts to seem tech-literate aren’t always the ones doing the tweeting. Social bots, computer programs that mimic people, are taking the place of candidates and campaign staffs on social media.

It’s all part of a larger trend toward automated politics. Campaigns and officials worldwide now use bots for a multitude of tasks beyond simple social media account management. These uses range from the seemingly mundane—sending out messages on particular pieces of legislation—to the downright nefarious—spreading covert political propaganda.

In Mexico, government actors have used bots to silence protesters and activists. In Turkey, these software-driven accounts send out pro-regime propaganda. Political bots have also been used—in the United States, Europe, and beyond—to drive up the follower counts of candidates and causes. This social padding creates an illusion of popularity, subtly manipulating public opinion during contested elections.

The Project on Computational Propaganda, a research endeavor at the University of Washington and the Oxford Internet Institute, splits manipulative political bots into two categories: controllers and facilitators. Controller bots fake, manipulate, and jam discourse on social media, while facilitator bots work to share, spread, and challenge it. In the last three U.S. national elections, controllers have been instrumental in AstroTurf political campaigns, mimicking real voters in order to give false impressions of candidate popularity or civic vigor. Researchers have tied AstroTurf bots to former Speaker of the House John Boehner and the conservative website Freedomist.com. More recently, bots supportive of the Trump campaign apparently made an appearance during the Nevada GOP primary. (We should note that it’s not clear that his campaign officially sanctioned the purported bots.)

The increasing prevalence and likely future sophistication of automated politics raise questions about the public sphere and how we want it to function in the era of social media. How might these techniques support or undermine the marketplace of ideas online? What should or should not be permitted? How should rules in this space be enforced?

Academics, policymakers, journalists, and a slew of other professionals are working to understand and address the rise of politicized social bots and automated politics more generally. Last December, we hosted a workshop bringing together experts from a variety of backgrounds in an attempt to frame the key questions around political automation, with a look at bots in particular. The event was part of the ongoing work of the Intelligence and Autonomy Initiative, a project based at Data & Society that examines the policymaking challenges produced by advances in machine intelligence.

Political bots are challenging in part because they are dual-use. Even though many of the bot deployments we see are designed to manipulate social media and suppress discourse, bots aren’t inherently corrosive to the public sphere. There are numerous examples of bots deployed by media organizations, artists, and cultural commentators oriented toward raising awareness and autonomously “radiating” relevant news to the public. For instance, @stopandfrisk tweets information on every instance of stop-and-frisk in New York City in order to highlight the embattled policy. Similarly, @staywokebot sends messages related to the Black Lives Matter movement.
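To make the “radiator” pattern concrete, here is a minimal sketch of such an awareness bot in Python using the tweepy library. The credentials, the fetch_new_records function, and the “[automated]” tag are all illustrative assumptions for this sketch, not the actual workings of @stopandfrisk or @staywokebot.

```python
import time

import tweepy

# Placeholder credentials; a real deployment would load these securely.
CONSUMER_KEY = "YOUR_CONSUMER_KEY"
CONSUMER_SECRET = "YOUR_CONSUMER_SECRET"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
ACCESS_TOKEN_SECRET = "YOUR_ACCESS_TOKEN_SECRET"


def fetch_new_records():
    """Hypothetical stand-in for polling a public-records feed; returns
    short, human-readable summaries of new incidents."""
    return []


def main():
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    api = tweepy.API(auth)

    while True:
        for record in fetch_new_records():
            # Declare "botness" in every message rather than posing as a person.
            api.update_status("[automated] " + record)
        time.sleep(15 * 60)  # Poll every 15 minutes to respect rate limits.


if __name__ == "__main__":
    main()
```

The design point is the radiating loop itself: the bot adds nothing to the record beyond a disclosure tag, leaving interpretation to the public.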

This is true of bots in general, even when they aren’t involved in politics. Intelligent systems can be used for all sorts of beneficial things—they can conserve energy and can even save lives—but they can also be used to waste resources and suppress free speech. Ultimately, the real challenge doesn’t lie in some inherent quality of the technology but in the incentives that encourage certain beneficial or harmful uses.

The upshot is that we should not simply block or allow all bots—the act of automation alone poses no threat to open discourse online. Instead, the challenge is to design a regime that encourages positive uses while effectively hindering negative ones. This will require three key elements.

First, the bot ecosystem must be open. Honesty about bot origin, design, and usage is a crucial feature of a Web where useful automation is possible and manipulative automation is prohibited. Designers, implementers, and regulators must work to support a norm that bots designed for democracy and the public good declare their “botness” to users. These same groups must also work to stop the deluge of bots that hide behind the pretense that they are real humans. The public needs tools that can easily identify bots in their social ecosystem and help ascertain their intent, as in the sketch below.
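What might such an identification tool look like? The Python sketch below shows one crude heuristic for flagging accounts that behave like high-volume automation without disclosing it. The AccountStats fields and every threshold are arbitrary assumptions for illustration, not research-validated values or any platform’s actual detection method.

```python
from dataclasses import dataclass


@dataclass
class AccountStats:
    """Publicly visible account metadata (fields are illustrative)."""
    tweets_per_day: float
    account_age_days: int
    followers: int
    following: int
    declares_botness: bool  # e.g., automation disclosed in the profile bio


def likely_undeclared_bot(a: AccountStats) -> bool:
    """Crude heuristic: flag high-volume, young, follow-heavy accounts
    that do not disclose automation. Thresholds are arbitrary assumptions."""
    if a.declares_botness:
        return False  # Honest bots are exactly what the norm encourages.
    return (
        a.tweets_per_day > 100  # inhuman posting volume
        or (a.account_age_days < 30 and a.following > 10 * max(a.followers, 1))
    )
```

A single heuristic like this will misfire on prolific humans and miss careful fakes; the point is that disclosure makes the classification problem trivial, which is why the norm matters more than the detector.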

Second, bots must be humble. More often than not, the dark techniques of political automation fall flat because they embed unrealistic assumptions about what they can achieve. Presumably the dream of acquiring masses of fake followers is to persuade the public that a candidate’s grass-roots support is larger than it is. Similarly, the dream of using masses of spam bots to shout down opposing views is to influence the public and disrupt the opposing side. In both cases, as far as we know, these techniques have been exposed by journalists, failed to sway the opposition, and resulted only in masses of largely ignored spam. It is precisely the humble bots—those that respect the autonomy of the public, persuade through masterful presentation of information, and contribute to the discourse—that are the most influential. The bot ecosystem should encourage humble bots over bots that simply shout the loudest.

Third, platforms must be part of the solution. It’s not acceptable for social media companies to allow political actors to use bots as proxies for spreading covert propaganda or as surrogates for real followers. The manipulation of public opinion online is no more acceptable than the same practice offline. While online platforms have long hidden behind the neutrality and immunity of “platformhood,” they are uniquely well-positioned to support a positive bot ecosystem. These platforms have a vested economic interest in getting involved, too: Bad bots can easily overwhelm productive discussion, eroding users’ trust and the platform’s usefulness.

Ultimately, digitization drives automation. The digitization of capital markets enabled the rise of algorithmic trading bots. The digitization of music enabled the rise of automated recommendation systems. So the digitization of politics carries with it the rise of a new automated politics. It is crucial that communities, commentators, and regulators begin consciously working toward a positive vision of what that looks like now, before the negative trends we see in full relief during this electoral cycle come to dominate the space.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.