Can We Train Dogs to Bother Robots Instead of People?

The Citizen's Guide to the Future
Sept. 17 2013 11:37 AM

A robot is a dog's best friend.

Photo provided by Eniko Kubinyi

Domestic dogs most likely diverged from wolves around 32,000 years ago. And they’ve been stealing sandwiches, digging through trash, and sniffing our crotches ever since. But new research out of Hungary shows we may be able to pawn the beasts off on robots, so long as the machines faintly resemble a human being.

Don’t get me wrong: I love the two balls of fluff my wife calls dogs, but I also look forward to a future where a robot digs their toys out from beneath the couch and cuts their medication into quarters. (Have you ever quartered a pill half the size of a Tic Tac? It’s maddening.)


To be clear, robot doggie care was not the goal of the research published in this month’s issue of Animal Cognition. The Hungarian researchers merely wanted to see if dogs would respond to cues given by robots if it meant finding food—in this case, a small piece of frankfurter. Moreover, they hypothesized that dogs would respond better to a social, humanlike robot than a robot that behaved like a machine.

To test this hypothesis, the researchers set up an experiment using a PeopleBot, which basically looks like a laptop with arms. For one group, the robot was social, meaning it communicated with the dog’s owner using a woman’s voice and shook the owner’s hand. Since previous studies have shown animals to be capable of triadic interactions, or eavesdropping, the researchers suspected that the dogs would learn something from watching their owners interact with the strange contraption. (Triadic interaction is why my wife and I have to spell out “W-A-L-K” and “O-U-T-S-I-D-E” in the presence of our mutts.) Conversely, the asocial robot got the dog’s attention using beeps, and when the owner reached out to shake its hand, the robot just ignored it—which must have been awkward for everyone involved.

After this initial interaction phase, the dogs watched the robot perform a pointing phase, in which it called the dog by name (or with a beep) and pointed at one of two bowls to indicate where the wiener was hidden. If a dog chose the right bowl, it got to eat the treat. When it got it wrong, the owner was instructed to show the dog the treat in the other bowl but not allow it to eat the morsel. (Note for dogs eagerly awaiting their new robot overlords: Insubordination will not be rewarded.)

So, did the pups learn to trust the PeopleBot? Well, a little bit. Three dogs out of 20 were able to perform at a consistency higher than chance when placed with the social robot. These results may sound paltry, but precisely zero of the dogs were able to take cues from the asocial bot—indicating that a robot’s behavior can have an effect on interaction. For comparison, the experiment was also performed with a human who mimicked the motions of a robot by pivoting and using just the right arm. (Getting kind of meta in here, isn’t it?) The human acted social for both sets, presumably since dogs already understand humans to be social. Apparently, dogs are also quite aware of our potential to lead them to food, as the researchers recorded 53 successful trials out of a possible 74 under these conditions. (This last fact seems to corroborate a concurrent study that concludes only 50 percent of my Pomeranians understand pointing.)

The study also found that dogs spend more time looking at their owners when interacting with a robot. Going back to triadic interaction, it seems the dogs are studying their humans for clues about the machine and, presumably, the location of those precious frankfurters. (Unfortunately, the study did not investigate the efficacy of another popular dog-training technique: sarcasm.) 

Dogs may even prove to be valuable, unbiased assets for robot engineers, since animals don’t carry all the cultural baggage that human test subjects do. Perhaps in the future cats will also show promise for interaction with robots. After all, they sure don’t want anything to do with us.  

Future Tense is a partnership of Slate, New America, and Arizona State University.
