But might anti-Mormon sentiment actually be higher still? Pollsters have little confidence that voters answer such questions about bias candidly. Through the 1980s, pre-election surveys were repeatedly undercut by the so-called Bradley effect, in which voters—apparently self-conscious about being seen as racist—lied to callers, overstating a minority candidate’s support. The fear that Barack Obama would fall victim to such deception bedeviled Democrats throughout 2008, so they looked for ways to measure what political scientists call “implicit attitudes”—socially acceptable views that can serve as a proxy for darker ones.
In late 2007, the Ohio Democratic Party set out to design a model to predict which voters in the state would be good targets for the party ticket. The presidential primary was unsettled, and many Democrats thought the complexion of their coalition could change depending on whether Obama or Hillary Clinton was the nominee. They needed to identify those Ohioans who might support a generic Democratic candidate, but would hold back if that nominee were a woman or a Black person. Party operatives convened focus groups to feel around for questions where a voter’s response could work as a tell, and then used polling to see which questions and answers actually correlated with how a person chose to vote. Ultimately, Ohio Democrats developed two oblique poll questions: “Do you agree we were better off as a society when women were expected to stay home and men were expected to make a living?” and “Do you think sometimes African-Americans overestimate the impacts of discrimination?”
The party conducted a massive survey, far larger than the typical media poll, asking voters the questions and identifying—across hundreds of granular variables—the characteristics of the people who said yes to one or both of them. Algorithms then churned through databases full of personal information to look for patterns that matched other attitudes, behaviors, or demographic traits. The result was a prediction—for every Ohio voter—about whether he or she, too, would answer yes to each question. After the primary, the Clinton and Obama campaigns returned their lists of identified supporters to the party, and analysts saw that their predictions had been largely borne out: Those who believed in old-fashioned gender roles had been less likely to back the female candidate, and those suspicious of racial-discrimination claims withheld support from the Black candidate.
Is there a similar “implicit attitudes” question about Mormonism that has such predictive power across the electorate? It’s not clear. Both academics and political operatives have spent far less time taking the measure of anti-Mormon public opinion than of attitudes on race, but public-opinion research suggests that different voters have vastly different reasons for distrusting members of the Church of Jesus Christ of Latter-day Saints in high office. While pundits may dwell on statistics showing that a majority of evangelicals respond negatively when public polls ask “Is Mormonism a Christian religion?” such answers illuminate little about vote choice. (Judaism is not a Christian religion, and it is easy to imagine Eric Cantor winning evangelical votes.) At the same time, polls show resistance to electing a Mormon president is robust among liberals, too; non-evangelicals could resent the faith’s cultural traditionalism, a matter that is completely unrelated to whether it is a Christian religion.
Researchers have recently found another way to go about this, one that is even more sensitive to respondents who might want to hide bias and does not rely on proxy concerns or coded issues. In the late 1990s, a pair of Harvard political scientists probing opinions about affirmative action worried that few people would honestly answer a pollster’s questions on such a delicate subject. Instead, the researchers turned their surveys into an experiment, randomly dividing their sample into two groups. Each group of subjects was provided with a list of statements and asked merely to identify how many they agreed with, rather than having to weigh in on specific statements directly. One group’s list would include an extra, “target” item—“I don’t approve of affirmative action,” say. Then researchers would compare the responses of the two groups, and attribute the difference in the number of statements chosen to the presence of the target item.
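The arithmetic behind such a “list experiment” is simple enough to sketch in a few lines of code. The simulation below uses entirely invented numbers (four neutral statements, an assumed 30 percent of respondents holding the sensitive view) purely to illustrate the logic: because no one reports which statements they agreed with, individual answers stay private, yet the gap between the two groups’ average counts recovers the prevalence of the target attitude.

```python
import random

random.seed(0)

TRUE_RATE = 0.30  # assumed share holding the sensitive view (invented)

def respond(sees_target, holds_view):
    """Return the only thing a respondent reports: a count of agreements."""
    # Suppose each respondent agrees with each of 4 neutral statements
    # with 50 percent probability (an arbitrary assumption)...
    count = sum(random.random() < 0.5 for _ in range(4))
    # ...and adds one if shown the target item and holding that view.
    if sees_target and holds_view:
        count += 1
    return count

# Randomly assign a large sample to control (no target item) and
# treatment (target item included).
control = [respond(False, random.random() < TRUE_RATE) for _ in range(5000)]
treatment = [respond(True, random.random() < TRUE_RATE) for _ in range(5000)]

# Randomization balances everything else between the groups, so the
# difference in mean counts estimates the share agreeing with the target.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated prevalence: {estimate:.2f}")  # near 0.30, up to sampling noise
```

The privacy guarantee comes from aggregation: a respondent who answers “three” could have agreed with any three items, so no single answer reveals a stance on the sensitive statement, yet the group-level difference pins down how many held it.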