Future Tense

How Movie Magic Could Help Translate for Deaf Students

Lifelike computer avatars can translate written and spoken words into sign language.

Matt Huenerfauth, right, records data from someone performing American Sign Language.

John Myer/RIT

The technology behind the cooking rats in Ratatouille and the dancing penguins in Happy Feet could help bridge stubborn academic gaps between deaf and hearing students. Researchers are using computer-animation techniques, such as motion capture, to make lifelike computer avatars that can reliably and naturally translate written and spoken words into sign language, whether it’s American Sign Language or that of another country.

English and ASL are fundamentally different languages, said computer scientist Matthew Huenerfauth, director of the Linguistic and Assistive Technologies Laboratory at the Rochester Institute of Technology, and translation between them “is just as hard as translating English to Chinese.” Programming avatars to perform that translation is much harder. Not only is ASL grammar different from English, but sign language also depends heavily on facial expressions, gaze changes, body positions, and interactions with the physical space around the signer to make and modify meaning. It’s translation in three dimensions.

About three-quarters of deaf and hard-of-hearing students in America are mainstreamed, learning alongside hearing students in schools and classes where sign-language interpreters are often in short supply. On average, deaf students graduate high school reading English—a second language to them—at a fourth-grade level, according to a report out of Gallaudet University, the premier university for deaf students. That reading deficit slows their learning in every other subject. It also limits the usefulness of closed captioning for multimedia course material.

“For kids, captioning is almost a waste of time,” said Harley Hamilton, a computer scientist at Georgia Tech affiliated with the Center for Accessible Technology in Sign, a joint project of the university and the Atlanta Area School for the Deaf. At the same time, he said, existing sign-language avatars aren’t ready for prime time, citing studies that show deaf students understand between 25 and 60 percent of what these avatars sign.

Among the best-performing sign-language avatars is Paula, named after DePaul University, where she’s being developed for myriad potential uses, ranging from doctors’ offices to airport security checkpoints to schools. A team of animators, computer scientists, and sign-language experts at DePaul builds Paula’s skills one linguistic challenge at a time. One example is “role shifting” in a story with multiple characters, which human signers indicate by turning their bodies to the side in a fluid, subtle sequence that starts with the eyes and continues through the head, neck, and torso. The researchers develop mathematical models of how bodies naturally make these moves and use them to automate critical parts of Paula’s signing, a process called keyframe animation.
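The keyframe idea can be sketched in a few lines of code. The example below is a hypothetical illustration, not the Paula project’s actual models: each body part turns toward an invented target angle, with staggered start times so the eyes lead and the torso follows, and a generic easing curve standing in for the team’s measured kinesiology.

```python
import numpy as np

def ease_in_out(t):
    """Smoothstep easing: ramps from 0 to 1 with gentle acceleration and deceleration."""
    return 3 * t**2 - 2 * t**3

def role_shift_angles(duration_s=1.0, fps=30):
    """Illustrative keyframe interpolation for a role shift.

    Each body part rotates toward a target angle, but the eyes start first,
    followed by the head, neck, and torso. All onsets and targets here are
    invented for the sketch, not values from the DePaul models.
    """
    parts = {          # (onset in seconds, target rotation in degrees)
        "eyes":  (0.00, 30.0),
        "head":  (0.10, 25.0),
        "neck":  (0.18, 20.0),
        "torso": (0.25, 15.0),
    }
    frames = int(duration_s * fps)
    times = np.linspace(0.0, duration_s, frames)
    curves = {}
    for part, (onset, target) in parts.items():
        # Normalize time after each part's onset, clamp to [0, 1], then ease.
        t = np.clip((times - onset) / (duration_s - onset), 0.0, 1.0)
        curves[part] = target * ease_in_out(t)
    return times, curves

times, curves = role_shift_angles()
print({part: round(float(curve[-1]), 1) for part, curve in curves.items()})  # final angles
```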

“You would think it would be easier, with all the amazing animation in movies,” said Rosalee Wolfe, a lead researcher on the Paula project and professor of computer graphics and human-computer interaction. “But once a movie is made, it’s frozen in time. The avatar must respond to the immediate situation. You can’t just make an animated phrase book. You need a much deeper understanding of the language, grammar, and human kinesiology.”

Each bit of Paula that can be fine-tuned with mathematical modeling is known as a “polygon,” and there are more than 17,000 polygons in her eyes alone, more than 8,000 controlling her mouth, and a mere 4,000 for each hand. Plus, the human body is never completely still, so the researchers need to mix in enough random movements to keep Paula “alive” without making her seem jittery or shaky.
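A common way to approximate that “never completely still” quality is to smooth random noise so a joint drifts slowly rather than twitching. The sketch below is a generic illustration of the idea; the amplitude and smoothing values are invented, not the Paula team’s tuned parameters.

```python
import numpy as np

def idle_sway(frames=300, amplitude_deg=0.5, smoothing=0.95, seed=0):
    """Subtle idle motion for one joint angle: low-pass-filtered random noise.

    Each frame keeps most of the previous pose and adds a small nudge, so the
    result drifts gently instead of jittering. All values are illustrative.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, frames)
    sway = np.zeros(frames)
    for i in range(1, frames):
        sway[i] = smoothing * sway[i - 1] + (1.0 - smoothing) * noise[i]
    # Rescale so the drift stays within a barely perceptible angular range.
    sway *= amplitude_deg / (np.abs(sway).max() + 1e-9)
    return sway

print(idle_sway()[:5])  # first few frames of the drift, in degrees
```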

The researchers in Huenerfauth’s lab grapple with similar nuances but take a different approach to computer animation, called motion capture. Their process starts with humans signing in special gloves and other clothing covered with tiny sensors that turn every movement into data. That data feeds mathematical models built to solve linguistic challenges, such as how a signer uses the space around her body to “locate” the objects she’s describing, creating invisible reference points that, for example, alter verb signs linked with direct objects.
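That last idea can be illustrated with a tiny piece of vector math, using invented coordinates rather than data from the lab’s recordings: once a referent has been “located” at a point in signing space, the ending position of a directional verb is pulled toward it.

```python
import numpy as np

def inflect_verb_endpoint(neutral_end, referent_point, weight=0.8):
    """Blend a directional verb's neutral ending hand position toward a
    referent's location in signing space (the weight is a made-up parameter)."""
    neutral_end = np.asarray(neutral_end, dtype=float)
    referent_point = np.asarray(referent_point, dtype=float)
    return (1.0 - weight) * neutral_end + weight * referent_point

neutral_end = [0.0, 1.3, 0.5]  # straight-ahead ending position, in meters
referent = [0.4, 1.4, 0.5]     # e.g., "the book," located to the signer's right
print(inflect_verb_endpoint(neutral_end, referent))
```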

“We publish the math, showing how we address these issues, and share all our motion-capture recordings with the world,” said Huenerfauth, so that other labs can replicate and build on their findings. While it may take decades before real-time sign-language translation avatars are available to deaf students, other applications of this research could be ready much sooner, such as avatars translating the written text of online educational materials into sign language at the press of a button.

The signing avatars can also be used in apps and games to help deaf children get early exposure to language, which is critical for their cognitive development. More than 90 percent of deaf children are born to hearing parents who don’t sign, said Hamilton, which means, “a lot of deaf children grow up with almost no language until they hit school. And that has created language deprivation.”

Parents talking and reading to hearing children helps to develop the language-processing parts of their brains that will later help them to communicate and to learn. Recent studies indicate that early sign language can develop these same brain areas, and that the more proficient deaf and hard-of-hearing students are in sign language, the better they do academically.

Hoping to bolster the sign-language skills of young children, Hamilton and fellow CATS researchers are creating a game called CopyCat, in which kids communicate with a sign-language cat named Iris, directing the cat to play with toys or take other actions to win the game. A motion-sensing camera captures the child’s signs, and if they’re incorrect, Iris stops and looks baffled. The developers are still working out the kinks. For instance, the current version of CopyCat doesn’t do well with signs that require people to cross their hands.

Meanwhile, researchers at the Motion Light Lab at Gallaudet are creating sign-language avatars that tell nursery rhymes written for deaf children. (Rhyming is replaced by repetitive rhythms in the signs.) The project uses motion-capture technology developed by a French animation and effects studio called Mocaplab, which is itself working on a sign-language translation avatar, as well as an app in which an avatar that teaches the user sign language can be rotated to show each sign from a first-person point of view.

“A lot of people think ‘it’s just movement,’ ” said Rémi Brun, founder and CEO of Mocaplab. “But, movement can be just as subtle, rich, and powerful as the human voice.”

This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

Future Tense is a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.