You may have already seen one of Tresset’s robots—all of them in this generation are named Paul. In July 2011, video of one sketching its master as part of an exhibit at London’s Tenderpixel went viral. The Pauls are in action once again in a video from October, when they were exhibited at the MERGE Festival in the United Kingdom.
Tresset’s robots use computer vision to identify their subjects—they can recognize faces—and then they spend about 30 minutes on each portrait. (One of his earlier-generation robots, Pete, will actually doodle when there are no faces in sight to draw.) The early versions were crude and involved no physical robots at all; the drawings were simulated with computer-aided drafting programs. But over the past 10 years or so, Tresset and Frederic Fol Leymarie, his co-director on the AIKON project at Goldsmiths, University of London, have made tremendous progress. Can you tell which image below was made by a computer and which was created by Tresset before he lost his inspiration?
Robots face some of the same problems in learning to draw as humans do, Tresset says. “When we draw, the difficulty is not in making the lines. The difficulty is in the perception of the subject and the perception of the drawing in progress.” But sometimes, it may help to make it seem that the robot has difficulty in making the lines—Tresset has found that people feel more empathy for the machines when they make human-esque mistakes like crooked or tilted lines. (He calls this “clumsy robotics.”) Humans are inclined to want to identify with robots, especially those with faces: Give a person a bot, and he or she will probably name it. But why is that connection important in robots that draw? Tresset believes that if the person being sketched feels something for the machine wielding the pen, he or she will find the 30-minute sketching process “more touching.” Plus, if the sitter assigns a personality to the robot, it might alter the human’s emotional response to the final product.
Most of us still don’t have robots in the home, but for decades now, we’ve been waiting for machines to do our bidding. Tresset believes that it might be a good idea to imbue all personal robots with some sort of artistic skill to encourage an emotional bond—it might allow for more trust, perhaps, though you can also see how overly identifying with a machine might create some existential questions.
Another project that Tresset has recently begun work on might have more immediately apparent benefits: using Paul-like technology to help people with limited or no use of their limbs create art. When Tresset lost his own passion for painting, his robots became “a kind of prosthetic for my loss of sensibility,” he told me in an email after the meeting. “[C]reativity can be a great help to overcome sadness, depression, and solitude.”
In recent years, the age-old discussion about whether technology diminishes our humanity has grown increasingly shrill. Yet Tresset demonstrates how, when built and implemented thoughtfully, technology can instead enhance humanity. Many people in his position 10 years ago would have simply let go of their passion for art—or even given up the medication and treatment for the sake of retaining that inspiration. He, however, found another path.
Disclosure: The Azteca Foundation, the foundation arm of the Mexican conglomerate Grupo Salinas, provided funding for my trip to Ciudad de las Ideas.