Microsoft's Tay chatbot for millennials is hilariously inappropriate, but conversational UI is serious business.

Microsoft’s A.I. Chatbot Keeps Hitting On People It Barely Knows

Future Tense
The Citizen's Guide to the Future
March 23, 2016, 5:32 PM

Microsoft describes Tay as an A.I. chatbot with "zero chill."

Microsoft made an artificially intelligent chatbot programmed to talk like a 19-year-old woman … and it immediately began hitting on people.

It also confessed some secrets …

... and shared some surprising views on sensitive topics.

All of which ensured that the bot made a splash on social media, if perhaps not in precisely the way its creators had intended.

Will Oremus

Will Oremus is Slate's senior technology writer. Email him at will.oremus@slate.com or follow him on Twitter.

Tay, Microsoft says on an explanatory website, is a joint project of the company’s research division and its Bing search team. The bot describes itself, in age-appropriate slang, as “A.I. fam from the internet that’s got zero chill.” (The “zero chill” part is definitely accurate, with the possible exception of some occasional Netflix and chill.) You can reach it on “your fave apps”: Twitter, Kik, and GroupMe, for now. The website notes that the bot is “targeted at 18 to 24 year olds in the U.S.,” a key demographic on social chat services, although it often seems to act rather younger than that. Its goal: “to experiment with and conduct research on conversational understanding.”

Asked whether he was worried that Tay might be sending the wrong messages, given its status as a verified Microsoft account on a public platform like Twitter, company spokesman Doug Dawson acknowledged it might have some rough edges to work out.

“We’re learning from this as we go,” he said, adding that Tay is designed for “entertainment purposes” and that her views do not represent Microsoft’s. “She’s going to get smarter the more she talks to you. But you have to start somewhere.”

As others have noted, Tay is not the first Microsoft bot to pose as a teenage girl. It appears to be at least loosely modeled on XiaoIce, a Mandarin-language chatbot billed as “Cortana’s little sister” that has become something of a hit in China since the company introduced it there in 2014.

Despite Dawson’s caution that Tay is for “entertainment purposes,” there is a serious purpose behind these sorts of bots. They’re designed to help companies like Microsoft improve their conversational user interfaces, which are key to the success of voice-based assistants like Cortana, Siri, and Alexa. The better those assistants get at understanding how we talk, and talking back, the more valuable they’ll become as portals to all sorts of online information and services.

Microsoft won’t say precisely where it gathered the material for Tay’s chats. The bot was built, according to its website, “by mining relevant public data” and by a staff of writers that included improv comedians. It certainly chats like a young person, if not always much like a human. At this stage in its development, the bot’s responses vary from witty to nonsensical to wildly inappropriate. To Tay’s credit, it occasionally seems to evince some awareness of its own social awkwardness.

Tay may be making some embarrassing mistakes today. But Microsoft probably figures that it’s better to make them now, on an experimental chatbot, than a few years down the road, when a program like Cortana is making appointments, buying things, and maybe even answering emails and chats on our behalf.

Future Tense is a partnership of Slate, New America, and Arizona State University.