And over the past eight years, in experiments my colleagues and I conducted for car companies like Toyota and Nissan, we used tracking software to detect hundreds of subjects’ facial movements as they cruised through virtual streets in a simulator. We then built mathematical formulas to predict what we called “the pre-accident face”—the nonverbal pattern that occurs seconds before the driver exhibits bad behavior, including swerving, lane violations, and even collisions. The two most predictive facial features for major accidents: the center of the lower lip and the center of the upper lip.
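The idea behind a "pre-accident" predictor can be sketched in a few lines of code. The following is a hypothetical illustration only, not the actual model from these experiments: it fits a simple logistic-regression classifier to made-up per-frame movement features for the two lip landmarks, predicting whether a short window of driving precedes an accident. All data and thresholds here are invented for demonstration.

```python
# Hypothetical sketch of a "pre-accident face" classifier.
# Features: mean absolute displacement (pixels per frame) of the
# lower-lip center and upper-lip center over a short time window.
# The data below is synthetic; the real experiments used tracked
# facial movements from hundreds of simulator subjects.
import numpy as np

rng = np.random.default_rng(0)

n = 200
X = rng.normal(1.0, 0.3, size=(n, 2))  # two lip-landmark features
y = np.zeros(n, dtype=int)
X[: n // 2] += 0.8    # invented assumption: more lip movement pre-accident
y[: n // 2] = 1       # label 1 = window precedes an accident

# Logistic regression fit with plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
```

In practice such a classifier would run continuously over a live video feed, flagging windows whose predicted probability crosses a threshold seconds before the driver swerves or drifts out of lane.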
The automobile companies funded this research in the hopes of incorporating in-car solutions that can prevent crashes. For all the potential benefits of this research, though, it would also be possible to use it in more controversial ways. Just imagine the debate that would ensue if insurance companies were granted access to our nonverbal driving histories, allowing them to charge higher rates to drivers who made telltale expressions. Also consider a project we worked on at the behest of Kyoto-based factory automation company OMRON. With a single camera, we demonstrated that some workplace mistakes can be detected before they occur simply by examining the workers’ facial movements. Again, while tracking nonverbal behavior may help prevent accidents, it could also clue in employers about personal habits that workers might not want to share.
I believe that we’ll see many wonderful applications of this technology, ranging from safety systems to educational tools for struggling students. At the same time, gamers need to be informed that they can be watched, and that how they interact with a game system like the Kinect can potentially reveal a lot about them. As technology becomes more immersive, your video-game persona is not just a character. It’s you.