“Without Facebook, Trump wouldn’t have won,” Theresa Hong, one of the main brains behind the Trump campaign’s digital efforts, told the BBC earlier this year. She’s right.
Today, data-driven ad targeting on the web is essential to any political campaign. That’s especially true for the presidential election, when candidates must reach many millions of voters. People use social media all day, and it’s one of the main ways Americans get their political information. And when one candidate uses a voter-targeting tactic—buying voter lists, booking local TV spots, buying crate loads of Facebook ads—others have to do it too. No candidate in a tight campaign can afford to not use every possible tool to get in front of voters.
More than $1.4 billion was spent on online advertising by candidates in the 2016 election. Campaigns and the super PACs supporting them poured money into digital ads fine-tuned to target specific demographics. Christian women in their 40s with kids in the Midwest could receive different ads than a teenage gamer in New York City, and they did, every time they hopped onto Twitter, Facebook, or any website that runs Google ads. Online ads can hit such specific subsets of people because they rely on a constellation of data collected from your browsing activity, things you like on Facebook, your location, credit card data, personal connections with friends, and other morsels of information. In theory, this corporate data collection could be subject to tighter regulation, for example, limits on how long a user's information may be held.
Yet, as politicians increasingly rely on data-driven advertising and microtargeting of their constituents to win elections, it's unlikely that the U.S. will ever see meaningful privacy legislation or regulation of big internet platforms. That's true despite many reasons to be skeptical of how those platforms use such data, not to mention the likelihood that data-targeting tools from Facebook, Google, and others were manipulated by Russian government-backed forces in an effort to foment divisiveness and help Trump win the election.
This is similar to the cycle of campaign finance reform. Candidates on both sides of the aisle agree that campaign finance laws need reform, especially since the 2010 Citizens United decision, which held that independent political spending by corporations is a protected form of "political speech." That ruling removed limits on how much corporations and wealthy donors could shell out in support of a candidate, and it opened the floodgates to a surge of unaccounted-for money in campaigns, mostly from super PACs: committees allowed to spend as much as they want in support of a particular candidate as long as they don't coordinate with that politician directly. Campaign finance reform, however, is notoriously difficult to enact, because once the money starts flowing into campaigns, it's very hard to convince the beneficiaries to shut off the faucet.
Privacy law and regulation of internet platforms are in a similar bind. The U.S. hasn't substantially updated federal digital privacy law in 31 years, since the passage of the Electronic Communications Privacy Act of 1986. That law holds, for example, that no warrant is required to access emails stored online for more than 180 days, which means that if you keep your emails in webmail for years, the government can grab them without one. In other words, police have to meet a much lower legal standard to read your emails than if you had printed them out and stored them in a desk drawer. The stagnation of digital privacy law isn't due to a shortage of evidence that online profiling can put users at risk.
Take what happened a decade ago, when mortgage brokers were peddling dangerous subprime loans. During that time, financial companies were combining online behavior data with location and demographic data in order to deduce a person's race. That in turn helped mortgage companies target minorities in their online marketing of bad financial products, which were sometimes referred to as "ghetto loans," according to research from Seeta Peña Gangadharan, a professor of media and communications at the London School of Economics. Subprime lenders made the list of the top 10 spenders in online advertising in the U.S. in 2007 and 2008, according to Nielsen/NetRatings data. Experian, for example, spent $54 million on online advertising in 2008. The year before, in 2007, Gangadharan found, Countrywide spent nearly $35 million on online advertising and Experian spent $43 million. Black wealth dropped by nearly 50 percent during the crisis, and black and Latino homeowners were about 70 percent more likely than whites to lose their homes to foreclosure.
Now that there's strong evidence that Kremlin-backed groups used these data-targeting systems to manipulate voters and spread misinformation online in the run-up to the 2016 election, you might expect this to be the opportune moment to start regulating how personal data is collected and used to target individuals in harmful ways.
Looking at just six of the 470 accounts, allegedly tied to a Kremlin-backed group that bought ads on the social network around the presidential election, that Facebook shared with Congress, Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, found that content from that small handful of accounts alone had been shared about 340 million times. Journalists have also found that both Facebook and Google allowed ad buyers to target people based on extremely bigoted categories. ProPublica found, for example, that Facebook let advertisers target people explicitly interested in topics like "how to burn jews" or "jew hater." Slate likewise was able to target ads on Facebook to users interested in "kill Muslimic Radicals" or "threesome rape." BuzzFeed similarly was able to buy ads on Google targeted at the keyword phrase "blacks ruin everything" and other racist slurs.
The impacts of these corporate ad-targeting practices on the 2016 election are still being assessed, and Congress has even scheduled a public hearing, inviting Facebook, Google, and Twitter to testify on how foreign entities used their platforms to manipulate voters. Even so, it's unlikely that any meaningful regulation of these platforms, or any new data privacy law, will result.
"The RNC and DNC have databases that have over 900 points of data on every member of the electorate," says Daniel Kreiss, a journalism professor at the University of North Carolina. "The core of that is public data built around commercial marketing data about things like credit card purchases and grocery card purchases and magazine subscription lists." That information is then bundled with data about party affiliation and voter turnout to target people on websites like Facebook, Kreiss explained.
There are, of course, other reasons Congress is unlikely to regulate online platforms beyond the fact that politicians themselves use data-driven online ads. Google is on track to be the largest corporate spender on lobbying in the U.S. this year, and executives from major tech companies are often massive donors to electoral campaigns. Still, Trump's campaign spent $85 million on Facebook alone, according to Hong, who also wrote Trump's Facebook posts. Republicans and Democrats alike are poised only to increase their spending on data-driven online voter targeting in future elections. And that means that no matter how pernicious the effects of microtargeted ads, a practice made possible only by vast corporate data collection, the chances of Congress taking substantive action to abate those harms are slim.