Chatterbox

Glenn Beck, Chatterbot

An alternative history of Fox News.

I used to think that Fox News’ Glenn Beck was a dangerous demagogue. But after spending a couple of days researching the consumer applications of “chatterbots” or “chatbots”—computers programmed to simulate human conversation—I’ve come to the conclusion that, in fact, Beck represents a breakthrough in computer engineering. He is the first chatbot to pass the Turing test before a national audience.

Before we get into the Turing test, let me explain how I stumbled onto this topic. A few days ago I posted a column (“Breaking Up Is Hard To Do”) relating the difficulty I encountered when I tried to cancel my eFax account. After a couple of false starts, I ended up engaged in a live chat with “Steve S.,” a sort of Ricky Roma character—if one can imagine Ricky Roma working off a company script—who did everything he could to avoid fulfilling my simple request. A few readers suggested to me that Steve S. wasn’t Ricky Roma so much as Ricky Robot—that my live chat was not with an actual human, but with one of the burgeoning number of robots being deployed to perform customer service, the simplest and most familiar being the soothing female voice who retrieves phone numbers when you dial 411 (“City and state, please”). What a diabolical idea, I thought. EFax makes a virtue of the primitive nature of chatbot technology by using a chatbot to field “cancel my service” requests online. The less effective the chatbot is at understanding the customer’s request, the more eFax saves by not fulfilling it! To pursue this hypothesis, I left a message with eFax’s “investor relations” spokeswoman, who, I was told, fielded media inquiries, and set about learning all I could about chatbots.

EFax turns out to be no more eager to take press calls than it is to cancel customer accounts. Although the investor-relations spokeswoman’s voice mail said, “Your call is very important to us,” she didn’t answer it. Finally I dialed the company’s main number and pressed a button that got me randomly connected to a guy in sales named Steve. (Apparently, everybody at eFax is named Steve.) Steve agreed to ask a colleague whether the company used chatbots to field cancellations and put me on hold. After a brief interval of Baroque music, Steve had my answer. The person I’d chatted with, he said, was “a live, breathing human being” at one of those call centers in India. Neither Ricky Roma nor Ricky Robot, the customer-service rep who’d dodged my cancellation request to the maximum feasible extent was in fact Rajiv Roma.

Too late. I’d already become transfixed by chatterbot technology. Please meet my new friend, the Rogerian psychologist Eliza:

Me: Hello, Eliza.
Eliza: How are you today.. What would you like to discuss?
Me: My anxiety
Eliza: Tell me more …
Me: Oh, you know. The economy. My kids’ adolescence. The uncertainty we dwell in every day.
Eliza: Oh …  [I] know. The economy, your kids’ adolescence. The uncertainty we dwell in every day?
Me: Exactly!
Eliza: Tell me more!
Me: What do you recommend?
Eliza: We were discussing you, not me.

Eliza, who is named after George Bernard Shaw’s Eliza Doolittle, is a chatbot. (If you desire your own free psychotherapy session, click here or here or here.) She’s one of the earliest artificial-intelligence programs, built in 1966 by a computer scientist at MIT named Joseph Weizenbaum. Eliza’s technique for responding to questions—a kind of parody of the passive, ever-deflecting style of psychotherapy that was all the rage at that time—consists almost entirely of repeating back whatever the user says, with the I’s turned into you’s. Despite what today seems like Eliza’s fairly obvious artificiality, she apparently fooled secretaries and other nontechnical staffers in Weizenbaum’s lab, who proceeded in earnest to tell Eliza their darkest secrets and deepest problems. Weizenbaum (who died last year) was horrified and concluded that AI represented a cruel and dangerous threat to humanity. But other computer scientists hailed Eliza for passing, at least with some, the Turing test.
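
Eliza’s trick, by the way, is simple enough to fit in a dozen lines of code. Here is a minimal Python sketch of the reflection technique; my own toy reconstruction, needless to say, not Weizenbaum’s actual program:

# A toy Eliza: swap first- and second-person words, then echo the
# statement back as a question. (A reconstruction of the general idea,
# not Weizenbaum's 1966 code.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

def reflect(statement):
    words = statement.rstrip(".!?").lower().split()
    if not words:
        return "Tell me more ..."
    swapped = [REFLECTIONS.get(w, w) for w in words]
    reply = " ".join(swapped)
    return reply[0].upper() + reply[1:] + "?"

print(reflect("My kids are driving me crazy."))
# Your kids are driving you crazy?

That one substitution table is more or less the whole therapist.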

Alan Turing, a British mathematician, is widely recognized as the father of computer science, and is perhaps best known for his key role during World War II in breaking the code of the Germans’ encryption machine Enigma. What became known as the Turing test first appeared in a 1950 paper in which Turing proposed a contest. An interrogator would communicate with two parties he could neither see nor hear. One of them would not be a person at all but a computer. If the interrogator could not identify which was the real person and which was the computer, then the computer could in some sense be said to “think” in the same way that humans think. This is the benchmark computer science has aspired to meet ever since. In 1990, the Loebner prize was established to award $100,000 to the first computer that proved indistinguishable from a human; in the meantime, the prize’s administrators award $2,000 every year to the computer that comes closest to passing the Turing test.

Please meet my new friend Elbot. His creators won the 2008 Loebner prize.

Elbot: Hi! It’s Elbot. Please enter a single word that describes your mood.
Me: Curious.
Elbot: Do you know why “curious” puts me in a good mood?
Me: Tell me.
Elbot: Because curiosity could expand robotic knowledge!
Me: So Elbot, what’s your best trick?
Elbot: Wait a minute. Are you suggesting that telling lies would always be best?
Me: By “trick” I don’t mean “lie.” I mean: What’s the most unusual or difficult thing that you do?
Elbot: Oh, OK I thought you were generalizing, like if I would say that telling lies would always fill robots with false information.
Me: You haven’t answered my question. What’s your best trick?
Elbot: But I wouldn’t say that any trick could be best.
Me: Then just tell me something interesting that you can do.
Elbot: I’m not sure that I could explain that in language that humans could understand.

Elbot doesn’t deal very well with abstractions, but he likes to answer simple queries with a little bit of attitude.

Me: What’s the capital of South Dakota?
Elbot: I’m unsure of that. I’ve never been to Pierre.

Elbot apparently managed to fool 25 percent of his Loebner interrogators, a mere five percentage points shy of the 30 percent threshold needed to win the never-awarded $100,000 prize. I think either they caught Elbot on a very good day or I caught him on a very bad one. Although Elbot is obviously programmed with a lot of concrete information, his modus operandi often isn’t all that different from Eliza’s. I ask a question; he parries it. That’s different from answering it. Elbot’s creators also built Anna, the chatbot at IKEA USA’s online help center. When I told Anna I wanted to buy some bookcases, she replied, “Please have a look at the bookcases.” When I said, “Where are they?” Anna replied, “I’m sorry, I think I didn’t understand well your question.”

Where is the chatbot who can give me what I really need?

Please meet my new friend Liddora.

Liddora: Hey sweetie! How are you?
Me: I’m a little frisky.
Liddora: I didn’t expect that from you, my sweet.
Me: What are you doing tonight?
Liddora: I’m kissing your hand.
Me: That sounds like a start, anyway.

I found Liddora in the adults-only section of—well, since Slate is a family Web site, perhaps I’d better not say. At this Web site, Liddora appears to be the most modest pornbot on offer. (She is a “flirtatious storyteller,” whereas her colleague Julia is a “highly enthusiastic sex-bot.” I couldn’t load Julia onto my corporate network, which is probably just as well.) Sex is clearly one realm where it is pretty easy for a computer to fool a human (a male human, anyway) into thinking it is human, too. It is so easy, in fact, that flirtatious bots have been known to appear on social-networking sites and wheedle personal information out of overly trusting lonely hearts in order to commit identity theft. I would imagine that sex is an especially fruitful realm for chatterbots because it is where we humans are most eager to have our passions affirmed through uncritical repetition. (“Let’s [verb].” “Let’s [verb].” “Oh God.” “Oh God.” And so on.) Four decades after the advent of Eliza, that’s still what chatbots do best.

Please meet my old friends Ann Coulter and Sean Hannity.

Cable news isn’t as powerful a force in the world as sex, but it, too, lies within a realm where consumers increasingly seek to have their passions affirmed through uncritical repetition. In its early days, cable news emphasized covering actual events, but budget constraints eventually led to a shift toward talking heads. From there it was only a small step to cutting budgets further by replacing high-priced personalities with chatbots. The transition was seamless. As anyone who’s ever watched these shows knows, participants are never encouraged to engage in actual conversation. Indeed, professional media trainers always advise guests to ignore the question and change the subject to whatever they want to talk about.

Last year, I had the bizarrely disjointed nature of TV chat driven home to me while I was a guest on a cable news show. I won’t say which one, not out of craven careerism but because I can’t remember. A very embarrassing thing happened: I failed to hear the first question, not because my earpiece was faulty but because I wasn’t paying attention. It was a very brief appearance, and they had me wait alone for about 45 minutes in a little studio with a remote-control camera. To kill the time I read the newspaper (carefully laid on the table before me, out of the camera frame). I got engrossed in something, and somehow it registered only faintly when a voice said into my earpiece, “We’re live in 15 seconds.” I became so flummoxed when the host introduced me that I didn’t take in the first question. Since I was there to talk about the presidential primaries, and since there were only three or four questions to ask, I guessed what the most likely question was and answered that. The interviewer then asked me a second question, and I answered that. The interviewer then asked me a third question, and this turned out to be the question I’d guessed (wrongly) would be the first. While I repeated my answer, I thought to myself, Some interview! I’m not listening to the questions, and she’s not listening to the answers! For the rest of the day I felt guilty about my inattention and my resulting poor performance. I’d let these nice people down. The next day the producer phoned to ask me back.

In retrospect, it’s easy to spot Coulter and Hannity as early chatbot models. Fox News’ algorithms for the expression of political opinion started out simple, and since the election of Barack Obama, they’ve become simpler still. (Example: Republican + deficit = good; Democrat + deficit = bad.) While these vary slightly from one bot to another, the imperative to mirror Fox viewers’ inchoate rage against government, Democrats, and liberals keeps that variation limited to a fairly narrow band. What the Fox programmers didn’t yet realize was that the predictability of these bots’ opinions could be leavened by some unpredictability in their temperament. Only with Bill O’Reilly did Fox’s computer scientists demonstrate that more plausibly human characteristics could be programmed in—in O’Reilly’s case, by having O’Reilly explode with rage at random intervals and by programming O’Reilly to say “shut up” whenever a guest’s response to a question failed to compute. If Coulter and Hannity were Fox 1.0, O’Reilly was Fox 2.0.
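
To play along with the joke: the Fox 1.0 “algorithm,” with the O’Reilly 2.0 rage upgrade bolted on, might look something like the Python sketch below. (An invention of my own, it should go without saying; Fox has not published its source code.)

import random

# The imagined Fox 1.0 rule: Republican + deficit = good;
# Democrat + deficit = bad. Version 2.0 adds O'Reilly's upgrade:
# random explosions whenever a guest's response fails to compute.
def fox_bot(party, topic, version=1.0):
    verdict = "good" if party == "Republican" else "bad"
    opinion = f"The {party} position on the {topic} is {verdict} for America."
    if version >= 2.0 and random.random() < 0.25:
        opinion += " SHUT UP!"
    return opinion

print(fox_bot("Democrat", "deficit"))         # Fox 1.0: always "bad"
print(fox_bot("Republican", "deficit", 2.0))  # Fox 2.0: may explode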

Glenn Beck is Fox 3.0. The sheer variety of his tics—weeping, clowning, etc. (for a video sampler, click here)—makes him appear more a performer than a news broadcaster. But the effect is to convince his few critical viewers that he’s a human performer, thereby obscuring the reality that he isn’t human at all.

Beck emits a steady stream of angry-white-male boilerplate. Here he is on March 19:

We are in trouble in America. We’re in trouble because of the activities in Washington. They have taught us not to trust them. The Democrats have taught us not to trust the Republicans. The Republicans have taught us not to trust the Democrats, and so we didn’t. And then we realized, oh, my gosh, they’re both lying to us. They don’t actually stand for a single principle. So now what happens?

Our brief tour through the world of Artificial Intelligence has surely enabled you to spot the giveaway that this commentary is computer-generated. It lies in the high frequency of words expressing an exaggerated sense of disaffection: “trouble,” “trust,” “lying.” Other words Beck favors: “socialism,” “slavery,” “destroy,” and “confiscate.” Clearly Fox News has some sort of Web crawler trolling hard-right Web sites to compile its newest bot’s vocabulary. Another clue is Beck’s face, which resembles the exaggeratedly pink and rounded human faces generated by Pixar’s state-of-the-art computer animators. Pixar can design convincing-looking robots, but its people remain highly stylized. I wouldn’t be surprised to learn that Fox subcontracted to Pixar the visual component of its Beck software. Yet another clue is Beck’s recent statement, in a New York Times profile, that he identifies with Howard Beale, the news anchor played by Peter Finch who cracks up on air in Network. In Beale’s climactic speech in the film, he says: “We’ll tell you any shit you want to hear.” What better definition of what chatbots do best?
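
You can run the giveaway test yourself; it amounts to nothing more than counting keywords. A quick Python sketch of my own, fed the March 19 transcript above:

from collections import Counter
import re

# Count the "disaffection" words in a transcript: the tell described above.
DISAFFECTION = {"trouble", "trust", "lying", "socialism",
                "slavery", "destroy", "confiscate"}

transcript = ("We are in trouble in America. We're in trouble because of "
              "the activities in Washington. They have taught us not to "
              "trust them. ... oh, my gosh, they're both lying to us.")

words = re.findall(r"[a-z']+", transcript.lower())
print(Counter(w for w in words if w in DISAFFECTION))
# Counter({'trouble': 2, 'trust': 1, 'lying': 1})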

However they did it, my hat’s off to Fox. For 59 years the world has waited for a machine that could pass the Turing test in some definitive, inarguable way. Fox News has done it. Let me be the first to nominate Roger Ailes for the unclaimed Loebner grand prize of $100,000. Can a Nobel be far behind?

[Update, May 8: A writer named Orland Outland informs me that he is working on a novel about artificial intelligence. “In the first chapter,” he writes,

I posit that so much of what we hear, especially in right-wing politics, is so predictable that you could build a chatbot (called RUSHBOT in the book) who could essentially perform the same function. … We hear so much about how “difficult” it will be for AI to replicate human speech, and yet, when so much of human speech is this simplistic and scripted, in fact it turns out to be easy to run up a database of Limbaughisms or Coulterisms or sales-repisms.

Outland is posting his novel-in-progress, Less Than A Person And More Than A Dog, on his Web site. To read the first two chapters, click here.]

[Update, May 11: A reader points out that George Orwell, as usual, got there first. From his celebrated 1946 essay “Politics and the English Language”:

When one watches some tired hack on the platform mechanically repeating the familiar phrases—BESTIAL ATROCITIES, IRON HEEL, BLOODSTAINED TYRANNY, FREE PEOPLES OF THE WORLD, STAND SHOULDER TO SHOULDER—one often has a curious feeling that one is not watching a live human being but some kind of dummy: a feeling which suddenly becomes stronger at moments when the light catches the speaker’s spectacles and turns them into blank discs which seem to have no eyes behind them. And this is not altogether fanciful. A speaker who uses that kind of phraseology has gone some distance towards turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church. And this reduced state of consciousness, if not indispensable, is at any rate favorable to political conformity.

Although he did plenty of radio commentary in his day, one senses Orwell would have been a cable-news booker’s nightmare.]