Future Tense

Microsoft’s A.I. Chatbot Keeps Hitting On People It Barely Knows

Microsoft describes Tay as an A.I. chatbot with “zero chill.”

Microsoft made an artificially intelligent chatbot programmed to talk like a 19-year-old woman … and it immediately began hitting on people.

It also confessed some secrets …

… and shared some surprising views on sensitive topics.

All of which ensured that the bot made a splash on social media, if perhaps not in precisely the way its creators had intended.

Tay, Microsoft says on an explanatory website, is a joint project of the company’s research division and its Bing search team. The bot describes itself, in age-appropriate slang, as “A.I. fam from the internet that’s got zero chill.” (The “zero chill” part is definitely accurate, with the possible exception of some occasional Netflix and chill.) You can reach it on “your fave apps”: Twitter, Kik, and GroupMe, for now. The website notes that the bot is “targeted at 18 to 24 year olds in the U.S.,” a key demographic on social chat services, although it often seems to act rather younger than that. Its goal: “to experiment with and conduct research on conversational understanding.”

Asked whether he was worried that Tay might be sending the wrong messages, given its status as a verified Microsoft account on a public platform like Twitter, company spokesman Doug Dawson acknowledged it might have some rough edges to work out.

“We’re learning from this as we go,” he said, adding that Tay is designed for “entertainment purposes” and that her views do not represent Microsoft’s. “She’s going to get smarter the more she talks to you. But you have to start somewhere.”

As others have noted, Tay is not the first Microsoft bot to pose as a teenage girl. It appears to be at least loosely modeled on XiaoIce, a Mandarin-language chatbot billed as “Cortana’s little sister” that has become something of a hit in China since the company introduced it there in 2014.

Despite Dawson’s caution that Tay is for “entertainment purposes,” there is a serious aim behind these sorts of bots. They’re designed to help companies like Microsoft improve their conversational user interfaces, which are key to the success of voice-based assistants like Cortana, Siri, and Alexa. The better those assistants get at understanding how we talk, and at talking back, the more valuable they’ll become as portals to all sorts of online information and services.

Microsoft won’t say precisely where it gathered the material for Tay’s chats. The bot was built, according to its website, “by mining relevant public data” and by a staff of writers that included improv comedians. It certainly chats like a young person, if not always much like a human. At this stage in its development, the bot’s responses vary from witty to nonsensical to wildly inappropriate. To Tay’s credit, it occasionally seems to evince some awareness of its own social awkwardness.

Tay may be making some embarrassing mistakes today. But Microsoft probably figures that it’s better to make them now, on an experimental chatbot, than a few years down the road, when a program like Cortana is making appointments, buying things, and maybe even answering emails and chats on our behalf.