Customers who get angry at robots take out their frustrations on human employees.

When Robots Make Us Angry, Humans Pay the Price

Always Right
A pop-up blog about customer service.
Sept. 14, 2017, 12:48 PM

Photo illustration by Slate. Photos by Thinkstock.
“REPRESENTATIVE!!”

Always Right is Slate’s pop-up blog exploring customer service across industries, technologies, and human relationships.

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. An industry that is, at its core, about human interaction is increasingly driven by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when robots malfunction, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that on their last customer service call they worked through automated phone menus only to end up speaking with a human anyway.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are bringing in machines to take over jobs that once required human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hailing an Uber to retrieving movie times to ordering flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? When machines fail, it’s usually the most immediate human operator who has to take responsibility for the malfunction, whether or not that person had any say in building the failing system. A customer service agent who finally answers your call had zero to do with the poorly designed phone menu you just wasted 15 minutes navigating. A cashier who previously only had to deal with one impatient shopper at a time might now be in charge of overseeing 10 self-checkout kiosks at once. When the kiosks inevitably malfunction, not only does that cashier have to puzzle through how to get them working again: She now has to deal with 10 frustrated customers at once, too.

It’s not only malfunctioning machines that can make us unfriendly toward other humans. Machines that work perfectly well can inspire people to act less humanely toward one another. Take Amazon’s Alexa, which is basically a customer service robot designed to live in your kitchen. Last year, a parent wrote about how his child’s behavior changed after they brought an Alexa home. Amazon’s tabletop smart speaker doesn’t require “please” or “thank you” to process commands, which he said was making his kid rude and demanding toward other people as well.

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

This matters now because in the future there are going to be even more robots than there are today. They’ll be in our homes, at work, school, in stores, in the sky, and on our sidewalks. And robots are becoming more human-like every day: Google’s voice recognition software can now understand English with 95 percent accuracy, and researchers recently developed robotic skin that’s more sensitive than a human hand.

And it matters because many of the machines being built for human interaction are designed not only to help us, but to need humans to help them, too. The industrial robotics market is expected to nearly triple in less than 10 years, and collaborative robots made to work alongside people, or co-bots as they’re often called, are expected to make up one-third of that growth, according to data from Loup Ventures. Workers in Amazon’s robotized warehouses don’t need to walk as far or carry as many heavy boxes—robotic shelves that rove the warehouse floor do that. But humans are still needed to do things that the robots can’t do well, like pick odd-shaped objects off shelves or improvise when necessary.

That’s not that different from the changes happening in customer service, except that in customer service you, the customer with the weird question only a human can answer, are the odd-shaped box. As more of these machines are brought on to help humans, whether on the factory floor or at a customer service counter, Elish warns that companies that use and design them need to take the roles of the humans who work with them seriously from the start. That means rigorous user testing and field work; asking people who will be tasked to collaborate with the machines, including customers, about their experience; and programming robots to be as easy to work with as possible. Physically, robots might be designed to move more slowly or be constructed from softer materials; on the software side, they could be programmed to deliver more information without requiring customers to ask for it, or to provide an easy route to connect with a person. (Or another option is just to hire more humans, since even a nicer robot isn’t a person with the empathy, patience, and understanding to interpret problems in a way only a person can.)

The great promise (and the great fear) of robots has always been that they’ll replace human labor. But if companies don’t carefully consider how humans interact with the robots that work for and alongside them, we may find we’re becoming a little less human, too.