Artificial stupidity can be just as dangerous as artificial intelligence.

Artificial Stupidity Can Be Just As Dangerous As Artificial Intelligence

The citizen’s guide to the future.
April 13 2015 9:06 AM
FROM SLATE, NEW AMERICA, AND ASU

Meet the Bots

Not just a pretty face: Alicia Vikander, right, in Ex Machina.

Photo courtesy Universal Pictures International

The day that science fiction writers have feared for so long has finally come—the machines have risen up. There is nowhere you can run and nowhere you can hide. The software “bot” onslaught is here, and every Homo sapiens is a target of the limitless legions of unceasing, unemotional, and untiring automatons. Resistance is futile, silly human—the bots are on the march. To get a sense of the scale of the automated army arrayed against us, consider that a 2014 report estimated that one-third of all Web traffic is fake. The bots are pretending to be us.

Bots, like rats, have colonized an astounding range of environments. Play online video games? That dude with seemingly superhuman reflexes who keeps pwning you is probably a bot. Go on the online dating platform Tinder and you will be targeted by wave after wave of these rapacious robotic creatures as you search for love and companionship. Want to have a conversation with people on Twitter? Some of them are probably not human. Have the temerity to go up against the Kremlin or even the Mexican government with an opposing point of view? Call John Connor, because here come the bots—bots that relentlessly remind you of things favorable to the regime, bots that try to stop protests, and many other automated instruments of political repression. And if that weren’t enough, hackers may use bots to automate a variety of dastardly deeds.

Tesla’s Elon Musk and the famous astrophysicist Stephen Hawking have become standard-bearers for the growing fear over artificial intelligence—but perhaps the most fascinating element here is that their warnings focus on hypothetical malicious automatons while ignoring real ones. Musk, in a recent interview, mused about whether we would be lucky if future robots enslaved us as pets. Yet today humankind is imperiled by a different type of bot onslaught from which there is no escaping, and Musk has not sounded the alarm. Perhaps that is due to the fact that the artificial menace behind this rise of the machines is not really anything we would consider to be “artificial intelligence.” Instead, to survey the bot armies marching across the Internet is to marvel at the power of artificial stupidity. Despite bots’ crudely coded, insectoid simplicity, they have managed to make a lot of people’s lives miserable.

So what’s a bot? Despite the name, these nonhuman Internet entities are not, contrary to the stock art in tech articles, literally robots typing on keyboards with metallic fingers. They are crude computer programs, ably defined by James Gleick in a New York Review of Books piece:

It’s understood now that, beside what we call the “real world,” we inhabit a variety of virtual worlds. Take Twitter. Or the Twitterverse. Twittersphere. You may think it’s a stretch to call this a “world,” but in many ways it has become a toy universe, populated by millions, most of whom resemble humans and may even, in their day jobs, be humans. But increasing numbers of Twitterers don’t even pretend to be human. Or worse, do pretend, when they are actually bots. “Bot” is of course short for robot. And bots are very, very tiny, skeletal, incapable robots—usually little more than a few crude lines of computer code. The scary thing is how easily we can be fooled.

So why is it called a “bot” despite the fact that it is far simpler than most real-world robots, which have complex software architectures? To answer this question is to go to some foundational debates about what machine intelligence really represents. In their textbook on artificial intelligence, David Poole and Alan Mackworth delineate several approaches to building artificial agents. One is to make a complex computer program that functions well in an environment simplified for the agent. For example, a factory robot can do well in its industrial home but might very well be lost outside that context. The other is to make a simple, buglike agent with limited abilities to reason and act but the ability to function in a complex and interactive environment. Many bio-inspired robots fit this design paradigm.

The simplest way to understand a bot, as computer security researcher David Geer notes, is as an “agent for a user or another program.” Although bots have a lot in common with Poole and Mackworth’s second agent-design paradigm, it is also fair to say that they sidestep artificial intelligence and its debates altogether. Where artificial intelligences are surprisingly primitive and fragile, difficult to generalize to new environments, and based on a contradictory set of scientific assumptions, bots have no such problems. A.I. programs are the majestic lions and eagles of the artificial ecosystem; bots are the disgusting yet evolutionarily successful cockroaches and termites. Many bots amount to automatic control programs roughly as sophisticated as a thermostat. Interested readers who want to make themselves a Reddit bot to help them read and reply to posts, for example, may consult this handy guide in the Python scripting language. But not all bots are even programmed in a high-level programming language. Take many game bots: third-party bots for the game Counter-Strike: Global Offensive amount to configuration files that customize existing game opponents, and you can write bots for many games with the Windows program AutoHotkey.
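To see just how little code a thermostat-grade bot needs, here is a minimal sketch of a keyword-and-canned-reply agent in Python. Everything in it—the trigger words, the replies—is invented for illustration; it is a toy in the spirit of the bots described above, not any real bot’s source.

```python
import random

# Hypothetical trigger words and canned replies, invented for illustration.
TRIGGERS = {"robot", "bot", "ai"}
CANNED_REPLIES = [
    "Couldn't agree more!",
    "Interesting take. Tell me more?",
    "Ha, exactly what I was thinking.",
]

def reply_to(message):
    """Return a canned reply if the message mentions a trigger word."""
    # Crude tokenization: strip trailing punctuation, lowercase everything.
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & TRIGGERS:
        return random.choice(CANNED_REPLIES)
    return None  # stay silent otherwise, like most lurkers
```

Wire a loop like this to a social platform’s API and you have a fully operational member of the bot horde—no “intelligence” required.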

Bots are easy to make, require a minimum of programming experience and software engineering knowledge, and can be tremendously profitable. And that is a large part of what makes botting serious business. This may seem like an exaggeration, given how frivolous Tinder bots, Twitter auto-trolls, and gamebots may seem. I like to break up the monotony of grad school, for example, by trolling a Twitter bot, telling it to do better at passing its Turing test. I’ve even facetiously suggested setting up a bot school with fellow Slate Future Tense contributor Miles Brundage to help it gain some “intelligence.”

However, the toll that bots have exacted is no laughing matter. The Gameover Zeus botnet, for example, cost its small-business targets about $100 million in losses in the United States and infected about 1 million computers worldwide. And when the aim is using bots to suppress political speech, the damage is difficult to quantify but meaningful all the same. The trouble with bots lies in the implications of their capacity to fool, and their sheer numbers.

When the filmmakers behind the new artificial intelligence movie Ex Machina went to the drawing board for an attention-grabbing guerrilla ad campaign to promote their flick, they didn’t build a robot of their own. Instead, they made a Tinder bot. Lonely SXSW festivalgoers on the dating app interacted with an alluring woman who asked them what it meant to be human … only to find out that “she” was a scripted computer program bearing the face of Ex Machina’s lead actress, preprogrammed to flirt with them. While scripted chatbots are as old as A.I. itself, this fake SXSW temptress is emblematic of a larger, worrying trend.

Skilled Tinder bot programmers, for example, can fool gullible users by scripting their bots to mimic the proverbial girl next door. These bots use suggestive pictures of the sort of ordinary women a male Tinder user might plausibly run across, respond to messages relatively slowly to mimic the pace of a real online dating exchange, and incorporate deceptively human touches of dating site conversation, such as compliments and flirting, to entice would-be suitors into texting them. Bots succeed or fail based on how well their creators understand the art of creating the “illusion of intelligence,” a kind of computational con game that uses the least “intelligent” of artificial agents to nonetheless project humanity to Web surfers going about their business.
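The con described above can be sketched in a few lines. The snippet below is a hypothetical illustration of the “illusion of intelligence”: canned lines, a simulated human typing delay, and a final message steering the mark off the platform. Every line of dialogue, and the phone number, is invented for illustration.

```python
import random
import time

# Hypothetical chat script; all lines and the number are invented.
SCRIPT = [
    "hey you :)",
    "so what do you do for fun?",
    "haha you seem really interesting",
    "my phone is dying, text me instead? 555-0199",  # steer the mark off-platform
]

def next_line(turn, simulate_delay=False):
    """Return the scripted reply for a given conversational turn."""
    if simulate_delay:
        # Scam bots wait a human-ish interval so replies feel hand-typed.
        time.sleep(random.uniform(20, 60))
    # Once the script runs out, park on the payoff line.
    return SCRIPT[min(turn, len(SCRIPT) - 1)]
```

The point is not the code’s sophistication—there is none—but that the delay and the flattery, not any reasoning, do all the work of seeming human.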

What’s the harm, beyond broken hearts? Bot scammers on dating sites are out to extract money from lovesick targets directly or to trick them into installing malware on their computers. Spammers have proved remarkably creative and adaptive. Perhaps the most pernicious form of Tinder-botting is “sextortion”: luring unsuspecting users into webcam sessions with a seemingly innocent hottie, recording them, and then blackmailing them. Unless the victim pays up, the fraudster will ensure the video is either posted online or sent directly to the victim’s loved ones.

Criminal botting, however, goes far beyond online dating sites; bots are a time-honored hacker tool. You’re probably familiar with botnets: groups of bots networked together to execute a distributed denial-of-service attack by flooding a target system with messages. But that doesn’t scratch the surface of the malicious means available to bot programmers. Bots can generate new encryption on the fly to fool security software, be equipped with programmable attack mechanisms, and cooperate with one another in a distributed system to mount complex attacks. Bots may even be combined with computer worms to create hybrid threats: while bots do not replicate or spread on their own, they can piggyback on worms that do. The 2004 Witty worm, which infected and crashed tens of thousands of servers, is believed to have been launched by a botnet.

But the problematic implications of bots are by no means limited to the criminal domain. The Mexican government, for example, has used swarms of bots to censor political discourse since President Enrique Peña Nieto’s election. Over the last year in particular, online activism campaigns against the unpopular president have disappeared after the government unleashed bots to crowd out the protest hashtag #YaSeQueNoAplauden as a trending topic. Online campaigns rely on critical mass, and judicious deployment of large groups of bots can squash them.

Ukrainian Facebook users also recently asked Facebook CEO Mark Zuckerberg to do something about a Kremlin-controlled “bot army” that spams Facebook with complaints about pro-Ukraine activists’ pages. As a result, even though the activists aren’t violating Facebook’s terms of service, they sometimes find their pages banned. In general, Russia has proved particularly adept at marrying the illusory humanity of spambots with the brute force of bot swarms in its propaganda campaigns. A recent social network analysis of the Kremlin’s bot ecology revealed an extensive array of bots disguised as real users that took to Twitter to sway the narrative mere hours after Russian opposition figure Boris Nemtsov was shot under suspicious circumstances.

While it is important to have people and organizations, like philosopher Nick Bostrom and the Future of Life Institute, that plan ahead for both near- and long-term policy contingencies arising from artificial intelligence, it speaks volumes that the real-world bot onslaught seems secondary to strange and impossibly convoluted scenarios like Roko’s Basilisk. Not all bots are bad; there’s a curious and playful side to bot-making, seen in ingenious hacks like Twitch Plays Pokémon and “what if” experiments like Civilization botfights. Bots have also, amusingly enough, been drafted into the time-honored battle to get a reservation at high-end San Francisco restaurants. One may even protest with activist bots preprogrammed to fight the power. But there is also no mistaking the power of bots to harm, whether the aim is scamming, political repression, or criminal bothacking.

Let us not mince words—we are being besieged by the bot hordes. Who can save humanity from the bot menace? All is not lost; the challenge of bot detection drives some of the most interesting research in computer science. This has resulted in tools like Indiana University’s 2014 BotOrNot bot spotter. But bot finding is an arms race; just when companies think they’ve outwitted the botters, the bots adapt. Unfortunately, our twilight struggle against the bots will likely be a long one.
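To give a flavor of the detection side of the arms race, here is a toy version of one common bot-spotting heuristic: humans post at irregular intervals, while crude bots running on a timer do not. The cutoff value is invented for illustration and is not drawn from BotOrNot or any real detector.

```python
import statistics

def looks_like_bot(post_times, cv_threshold=0.1):
    """Flag an account whose inter-post gaps are suspiciously regular.

    post_times: posting timestamps in seconds, sorted ascending.
    cv_threshold: invented cutoff on the coefficient of variation.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2:
        return False  # not enough evidence either way
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True  # simultaneous posts are certainly automated
    # Low spread relative to the mean means clockwork-regular posting.
    return statistics.stdev(gaps) / mean_gap < cv_threshold
```

Real systems combine dozens of such signals—and, true to the arms race, botters respond by randomizing their timers, which is exactly why no single heuristic stays effective for long.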

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.