Future Tense

We Need to Regulate Technology That Can Detect Your Emotions

What if Glass could tell users how their conversation partners are feeling?

Photo by Justin Sullivan/Getty Images

The FBI’s Next Generation Identification (NGI) program currently holds biometric information on as much as one-third of the country. By next year, the FBI has announced, its database could contain 4.3 million facial recognition images taken outside the realm of crime-related activity, and more than 52 million images overall. Four states are already using this database, with 21 more considering or currently testing the technology for their police forces.

Meanwhile, a new company called Emotient is paving the way for facial expression recognition: software that, applied to something like Google Glass or another type of camera, can recognize how you’re feeling. Other emotion-sensing software is on the market, like FaceReader by Noldus, but Emotient is getting the most attention because of its high level of funding ($6 million added in March) and its interest in pairing with Google Glass. Emotient says its engine could be used for health care and retail purposes, but it’s easy to imagine how its applications could evolve once it hits the market, especially in law enforcement. For instance, VibraImage, an anxiety-sensing software, was used at the Sochi Olympics to judge the mental state of the fans.
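
Emotient hasn’t published how its engine works, but tools in this category generally follow the same basic pipeline: locate a face in a camera frame, then hand the cropped face to a trained expression classifier. Here is a minimal sketch of that pipeline, assuming OpenCV’s stock face detector; the classify_expression function is a hypothetical placeholder, not Emotient’s or Noldus’ actual model.

```python
# Generic sketch of an expression-recognition pipeline, NOT Emotient's
# proprietary engine. Assumes the opencv-python package is installed.
import cv2

# OpenCV ships with pretrained Haar cascade face detectors.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_pixels):
    """Hypothetical placeholder for a trained expression classifier."""
    return "neutral"  # a real model would score joy, anger, fear, etc.

frame = cv2.imread("frame.jpg")  # one camera frame (path is illustrative)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces, crop each one, and hand it to the classifier.
for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
    face = gray[y:y + h, x:x + w]
    print(classify_expression(face))
```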

The same way facial recognition could be used to track where you are, who you’re with, and what you’re doing, expression recognition could be used to monitor how you’ve been feeling over time and what your general state of mind seems to be. Imagine: A cop wearing Google Glass knocks on your door during the investigation of a crime, and an algorithm tells him that you seem nervous. You aren’t nervous because you’re guilty, but now he believes he has probable cause to enter your home. Alternatively, you’re in a job interview and hesitate to answer a question. The interviewer could register that you seem to be holding something back and refuse you the job.

Technology moves faster than legislation, and that’s very apparent in the painfully slow-moving process to create good regulation for facial recognition. Not very many laws are currently on the books, and what exists is a patchwork. For instance, the Illinois Biometric Information Privacy Act of 2008 orders businesses to have a plan for the limited storage and eventual destruction of biometric information. Maine has created laws prohibiting the use of facial recognition by any state authorities, and Wisconsin has now banned its use in law enforcement. Twelve other states have not started using it, but they don’t have any laws against it.

Nationally, the ACLU is pushing the U.S. Department of Commerce to pass a “voluntary code of conduct” (note the word voluntary) that would simply illustrate what should and should not be allowed concerning the use of facial recognition in the private sector. Proposed tenets of the code of conduct include giving consumers a choice about where facial recognition can be used and requiring that they be alerted when it is. Given how far behind we are on regulation of facial recognition, it doesn’t take a sophisticated algorithm to ID the concern on my mug.

“I think I’d be very skeptical of [emotion-detecting technology’s] efficacy, but that doesn’t mean it would be harmless,” says Jay Stanley, a senior policy analyst with the ACLU’s Speech, Privacy and Technology Project.* “One could imagine scenarios when it’s used to try to figure out specific things about you: how you react to stress, whether you have a sense of humor, whether you’re suffering from depression, … whether you are a docile and compliant person or a potential troublemaker [and] whether you’re prone to anger,” he says. Stanley also points out that the government could track your emotions over a period of time and come to think of you as an unstable person, even if you’re just having a rough month.

Alessandro Acquisti, an associate professor of information technology and public policy at Carnegie Mellon University’s Heinz College, raises another possible problem. “Even the creators of these algorithms do not eventually know how [they] work … There are now some algorithms that evolve over time based on more and more data you feed into the system, and they evolve in matters that are no longer fully understandable by their programmer.” Furthermore, Acquisti says that the companies that provide the technology to the government or other businesses would likely refuse to let customers or the public know how the technology works in the first place, citing trade secrets.
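
Acquisti’s point about continuously learning systems can be illustrated with a toy sketch. In the example below (a hypothetical scikit-learn setup, not any vendor’s real system), a classifier keeps updating as new data streams in; after enough updates, the same input can receive a different verdict, with nothing in the system explaining why.

```python
# Toy illustration of Acquisti's point, not any vendor's real system:
# a classifier that keeps learning from a data stream can change its
# behavior in ways nobody explicitly reviewed or can easily explain.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

# Initial training batch: two made-up "emotion cue" features per sample,
# where the label depends only on the first feature.
X0 = rng.normal(size=(200, 2))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])

probe = np.array([[-0.5, 2.0]])          # one fixed hypothetical subject
print("before drift:", model.predict(probe))

# The deployed system keeps updating on new data whose statistics have
# drifted: labels now track the second feature instead of the first.
for _ in range(100):
    Xn = rng.normal(size=(20, 2))
    yn = (Xn[:, 1] > 0).astype(int)
    model.partial_fit(Xn, yn)

print("after drift: ", model.predict(probe))  # same input, likely a new verdict
```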

Like Stanley, Acquisti doubts that emotion-detection technology will ever be infallible. “When it comes to things such as recognizing emotion, we will never get 100 percent accuracy,” Acquisti says.

So what type of regulation do we need here? I’d like to see policies that would prevent law enforcement from using expression recognition altogether. Barring that, it should be used only in certain situations, and the information collected shouldn’t be stored in a database for indefinite access. As for its use in the private sector, citizens should always know when it’s being used, and there should be absolutely no use of it during employment screenings.

Now, whether you and your friends want to permit its use during poker games is your own business.

*Correction, June 13, 2014: This post originally misquoted the ACLU’s Jay Stanley. He said that problems with the efficacy of emotion-detecting technology “doesn’t mean it would be harmless,” not “doesn’t mean it wouldn’t be harmless.”