Future Tense

Why You Should Be Suspicious of That Study Claiming A.I. Can Detect a Person’s Sexual Orientation

There are some things technology shouldn’t do.


Recently, the A.I. community was stunned when two Stanford researchers released a study claiming that artificial intelligence could essentially detect whether a person is gay or straight. For those of us who have been working on issues of bias in A.I., it was a moment we had long foreseen: Someone would attempt to apply A.I. technology to categorize human identity, reducing the rich complexity of our daily lives, activities, and personalities to a couple of simplistic variables. The now-infamous study is really only the tip of the iceberg when it comes to the dangers of mapping predictive analytics onto nuanced questions of human identity. Drawing entirely on white subjects who had posted profiles and photographs on dating sites, the study concluded that its neural network could predict whether a person was gay or straight more than 70 percent of the time (the exact rate depended on gender and on how many images were analyzed).

The study was deeply flawed and dystopian, largely because of its choices about whom to study and how to categorize them. In addition to studying only people who were white, it allowed just two categories of sexual identity, gay or straight, and assumed that people's sexual identity tracks their sexual activity. In reality, none of these categories apply to vast numbers of human beings, whose identities, behaviors, and bodies fail to conform to the researchers' simplistic assumptions. Even setting aside the methodological problems, consider what the study says about, well, people. You only count if you are white. You only count if you are either gay or straight.
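To make the sampling critique concrete, here is a minimal, entirely synthetic sketch (written in Python with scikit-learn; it is not the researchers' code, data, or model). The point it illustrates is narrow: a binary classifier trained and scored on a self-selected sample can report impressive numbers even when the cue it learned barely exists in the broader population it claims to describe.

```python
# Synthetic illustration of sample bias, not a reproduction of the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_sample(n, cue_strength):
    """Fake 'photo features' whose link to a forced binary label reflects
    how the sample was collected, not anything inherent to the people."""
    y = rng.integers(0, 2, size=n)              # only two labels allowed
    x = rng.normal(size=(n, 5))                 # arbitrary image-like features
    x[:, 0] += cue_strength * (2 * y - 1)       # self-presentation cue
    return x, y

# Narrow, self-selected sample (e.g., people who chose to post dating profiles).
X_sample, y_sample = make_sample(2000, cue_strength=1.0)
# Broader population, where that presentation cue is much weaker.
X_world, y_world = make_sample(2000, cue_strength=0.1)

clf = LogisticRegression().fit(X_sample, y_sample)
print("accuracy on the narrow sample:     ", accuracy_score(y_sample, clf.predict(X_sample)))
print("accuracy on the broader population:", accuracy_score(y_world, clf.predict(X_world)))
# Typically prints roughly 0.84 vs. 0.55: strong-looking numbers on the
# curated sample say little about people outside it.
```

The gap between the two printed numbers is the gap between "works on this dataset" and "works on people in general," which is precisely the distinction critics of the study drew.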

“Technology cannot identify someone’s sexual orientation,” Jim Halloran, GLAAD’s chief digital officer, said in a statement. “What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated.” Halloran continued, “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.”

Unsurprisingly, the researchers claimed that critics were rushing to judgment. “Our findings could be wrong,” they admitted in a statement released Monday. “[H]owever, scientific findings can only be debunked by scientific data and replication, not by well-meaning lawyers and communication officers lacking scientific training,” they continued.

It may be tempting to dismiss this study as a mere academic exercise, but if this sort of research goes unchallenged, it could be applied in terrifying ways. Already, LGBT people are being rounded up for imprisonment (Chechnya), beaten by police (Jordan), targeted for being “suspected lesbians” (Indonesia), or at risk of being fired from military service (United States). What if homophobic parents could use dubious A.I. to “determine” whether their child is gay? If A.I. plays a role in defining categories of human identity, then what role is left for law to challenge the findings of science? What is the future of civil rights in a world where, in the name of science, the act of prediction can go essentially unchallenged? These are not just questions of science or methodology. The answers can mean the difference between life, liberty, and equality on the one hand, and death, imprisonment, and discrimination on the other.

The irony is that we have seen much of this before. Years ago, constitutional law had a similar moment of reckoning. Critical race scholars such as Charles Lawrence demonstrated how the notion of color blindness actually obscured great structural inequalities among identity-based categories. The constitutional ideals meant to offer “formal equality” to everyone, these scholars argued, did not deliver real equality at all. Far from ensuring equality for all, the notionally objective application of the law had the opposite effect, perpetuating discrimination against different groups.

There is, today, a curious parallel at the intersection of law and technology. An algorithm can discriminate between groups instantly and at massive scale. At the same time, the law can fail to address that discrimination, because the rhetoric of scientific objectivity forecloses any deeper, structural analysis of the bias at the heart of these projects and of the discrimination that flows directly from them.

In this case, the researchers are right that science can go a long way toward debunking their biased claims. But they are wrong to suggest that there is no role for law in addressing their methodological questions and motivations. The true promise of A.I. does not lie in the information we reveal about one another, but rather in the questions these systems raise about the interaction of technology, identity, and the future of civil rights. We can use A.I. to design a better world. But if we leave civil rights out of the discussion, we run the risk of reproducing the very types of discrimination we hope to eradicate.