Slate’s April Glaser on Cambridge Analytica, Facebook, and tech reporting.

The Cambridge Analytica Mess Reveals That Even Careful Policies Can’t Protect Our Data

Slate’s Inner Workings.
March 21, 2018, 8:00 AM

Slate’s April Glaser on the data-sharing scandal and why shutting down Facebook won’t solve all our problems.


Photo illustration by Slate. Image by Facebook.

Late last week, news broke that 50 million Facebook users had unknowingly had their data collected by a political-data firm named Cambridge Analytica, which had worked with the Trump campaign in 2016. The impact of this scandal is as yet unknown but could be extensive, and it further reveals that this sort of data sharing is more prevalent than we think, and was permissible under Facebook's policies.

In this S+ Extra podcast—which is exclusive to Slate Plus members—Chau Tu talks with Slate tech writer April Glaser about the scandal, the incalculable influence of social media, and the difficulties of the tech reporting beat.


* * *

This transcript has been edited and condensed for clarity.

Chau Tu: Let’s start with the big news that broke late last week, this Cambridge Analytica scandal. Can you explain what happened here?

April Glaser: Sure. Cambridge Analytica is the data-analytics firm that was hired by the Trump campaign to target its Facebook posts and ads to potential voters in the run-up to the 2016 election. Now, this firm also worked for Ted Cruz before he dropped out, and it worked on the Brexit campaign in the U.K., for the leave side. This firm was using data to target people that it had obtained in somewhat illegitimate ways. Essentially, they had hired a professor who made a personality quiz on Facebook, and the people who took that quiz, about 270,000 of them, also consented at the time to let the quiz scrape data from their profiles and the profiles of their friends. So those 270,000 people expanded to tens of millions of people. That data was then siphoned over, or handed over, to Cambridge Analytica, which then ostensibly used it to target people for presidential campaigns: Ted Cruz’s at first, and then Trump’s ultimately successful campaign later on.


Now, the thing is that the data was actually obtained legally, because Facebook’s terms of service at the time allowed people who make apps on Facebook to scrape data on users, as well as users’ friends. But Cambridge Analytica was not supposed to get it, because the person who made the quiz wasn’t supposed to hand it over; he was supposed to keep it for research purposes. So now about 50 million people, the New York Times reported, had their data illegitimately taken and then used to target them with emotionally tailored ads, based on the really fine-grained details of their social media lives, in the run-up to the presidential election.

One of your last points there was that this isn’t really technically a breach, right?

Yeah. So, Facebook is pushing back really, really hard on the idea that this was a breach, because calling it a breach would mean their security systems had been violated, that there was some hole or exploit that was pounced on and data taken that way. Their systems weren’t breached. Rather, the data was taken through the front door: Facebook actually allowed developers to take this data. It just wasn’t supposed to be handed over to anyone else.

But there is no telling what other app developers were scraping data off Facebook and siphoning it to other firms at the time. Facebook changed its policies to be less permissive in 2015, and this quiz was posted in 2014.


Is there anything that Facebook users can do if they’re concerned about their data, at least in this specific case?

Well, right now it’s not clear whose data was siphoned up. We’re talking about 50 million people, right? To put that in perspective, about 137 million people voted in 2016, so that’s a lot. Really, what they can do is go into their privacy settings and make sure they’ve opted out of letting developers share data about them or their friends with third parties. So make sure that that’s checked.

If this happened, as it did in 2014, and you were a part of that dragnet collection, then right now there’s really nothing that can be done.

As someone who’s covering the tech industry, what are your thoughts about how much power these companies seem to wield these days? Should we regular consumers be worried about our data or anything else that’s going on?


We should certainly be worried. I mean, what we’re talking about, if Cambridge Analytica did what they say they were able to do, and what people on the Trump campaign have gone on the record saying Cambridge Analytica was capable of doing, is ads targeted to your emotional state. One of Trump’s people, who led the digital arm of his campaign, worked closely with Cambridge Analytica and wrote his Facebook posts. She told the BBC last year that, for instance, if you were a working mother concerned with child care, they would send you an ad that didn’t have any militaristic vibe, didn’t have Trump’s voice, had a warmer, cozier feel. They were really targeting people based on nuances of their personality that end up being shared as we surf the net or like things on Facebook. And that can be a very manipulative amount of information, determining what we know and what we don’t know, what we see in our news feeds, what ads are served to us, and possibly even how we decide to vote.

Yeah. You wrote recently about how there’s a lot of pressure to get the tech CEOs, like [Facebook’s] Mark Zuckerberg and Twitter’s Jack Dorsey to testify in front of Congress about whether or not these companies could have taken any precautions or identified the Russian influence much sooner than they did. Can you tell us a little bit more about that, what’s going on there?

Right. After news broke Friday night about this whole mess, Cambridge Analytica, Facebook, 50 million users having their data illegitimately used, the calls grew louder for the CEOs of these tech companies, which are supposed to be stewards of our data, to actually come and testify in front of Congress. So now, [Sen.] Amy Klobuchar has said, and she’s said this before, “It’s time for Mark Zuckerberg to come and testify in front of Congress.” We’ve seen Republican senators, like Sen. Kennedy, say, “It’s time for Zuckerberg to testify in front of Congress.” [Sen.] Collins of Maine likewise has said that some chief executive or high-level executive needs to come and testify. We’re also seeing calls from the U.K.

So basically the idea now is that they can’t send lawyers to bob and weave and make excuses about what happened and give legalese. They want people who are actually responsible for the goings on of these companies to talk about what went wrong, what they could have done, and what they’re going to do to make sure this doesn’t happen again.


How likely is that to happen?

As pressure mounts, and as more stories surface about Russian operatives manipulating these platforms, which has been a central part of the news for over a year now, and about campaign-hired data-analytics firms using data in extremely inappropriate and potentially manipulative ways, there are going to be more calls from people who depend on fair elections with some level of integrity, like elected officials in Congress, for these tech CEOs to really stand up and explain themselves. You have to realize that these members of Congress depend on elections working with some level of integrity, too. So they’re very interested in getting some accountability here.

To switch gears a little bit, you and another Slate tech writer, Will Oremus, recently started a Slate podcast called If Then. Tell us a little bit more about it: What’s the goal and who are you guys talking to?

Our podcast is a lot of fun. It’s called If Then; it’s once a week, and we drop it on Wednesdays. It is about the most powerful industry in the world, the technology industry, and the politics, the culture, and everything that weaves through it. Whether it’s new teen YouTube stars doing horrific things to get likes and views, or Russia meddling with elections through social media feeds, or net neutrality, which is something I write about a lot, we cover it every week in a 20- to 30-minute digest and discussion of the news, as well as an interview with someone.

Last week, we actually interviewed the head of Facebook’s news feed, Adam Mosseri. He is, of course, a very powerful dude, because he tweaks the knobs on Facebook, which is the primary way that many, many people, over 40 percent of people, get their news and information.

Right. What was his take on everything?

He basically says it’s a hard job because he’s got a lot to do. We specifically asked him, though, about what’s going on in Myanmar right now. At the beginning of last week, the U.N. issued a report saying that Facebook is heavily implicated in helping to amplify and share posts that are stoking violence and ethnic hatred throughout the region. The Rohingya, a Muslim ethnic group in Myanmar, are being displaced; about 700,000 have fled to Bangladesh. The U.N. has said that it’s approaching a situation that could be called genocide, and that this is state-sanctioned, military-sanctioned violence. A lot of the hate speech and the sentiment being used to justify it throughout the country is being circulated on Facebook. The U.N. said that Facebook has become a place rife with hate speech, and that it’s fueling a lot of the justification for the terrible, terrible atrocities that are happening there.

We asked the head of Facebook’s news feed, Adam, what he thought about this. He said that this is something that’s keeping them up at night, but they’re trying really hard to figure it out. Clearly they need to figure it out quickly because there’s very dangerous information being spread rapid-fire and it’s affecting what people know, and don’t know, and amplifying hatred in the region.

Right. The next question there is basically why not shut down Facebook in that area, right?

Yeah. After this interview, one of our listeners, Kevin Roose—he’s a fantastic columnist at the New York Times—posed the question on Twitter, “If Facebook is so awful there, why not just shut it down? It’s not like Facebook is probably making a ton of money there, compared to where it is in other countries.”

I thought it was an interesting question: Why not have Facebook just pull out entirely? So I did my reporter thing and hopped on the phone with seven or eight, maybe nine people to figure out what’s going on there, because I’m not an expert in the area at all. I called a ton of people who have boots on the ground, as well as people who have spent a lot of time there, have studied the region, and really know a more nuanced perspective, both on the ethnic hatred that’s happened there and on how information spreads.

And nobody I spoke to said that shutting Facebook down would make any sense, because doing that would assume that people use Facebook only to spread hate speech, they said. Facebook is used for online shopping, for wishing people a happy birthday, and to get real news, too. Newspapers depend on Facebook. Journalists depend on it. Activists depend on it.

What they said Facebook could do better, though, is have more region-based, local policies in place. For instance, if there were people there who understood the context of hate speech in Myanmar, in Burmese culture, against the Rohingya, then flagged posts would be taken down much faster, instead of going through a complicated review process that allows them to stay up for two days and still get hundreds of thousands or millions of views. But Facebook doesn’t have an office there, for example. There are no people there who seem to really have that deep context, or if they are there, they’re not working fast enough and there aren’t enough of them.

As someone who’s been writing about the tech industry, do you feel like there are any topics that aren’t being written about that you wish there was more of a spotlight on?

I feel like any journalist who doesn’t have 30 stories that she is late on, that she’s desperately trying to write, is probably not doing her job. Yeah, I have a huge bag of notes to myself about what else should be written about.

I would say that social media is covered really, really, really extensively. What’s not covered enough now, and something I wish I had more time to cover, is how people who are in poverty, or aren’t making as much money, aren’t as wealthy in this country, grapple with technology, and how technology and automation are used to disenfranchise people. A lot of people experience computers, data, and surveillance by going to apply for food stamps, for instance, and being asked a ton of questions about their personal lives in order to justify getting that help. And they’re on the other side of the monitor as people enter data about them.

I feel like we, as a community of tech journalists, could do a much better job of reporting on that, and on the perspective of people who may be disenfranchised by technological systems.

Yeah, because technology is so expansive, but I’m sure there are still people who don’t have access to a lot of things that most others do.

Yeah. Internet broadband penetration in the U.S. is, I believe, just over 70 percent. That figure might be old, but I’m pretty sure it’s still about right. In other words, however you square it, there are still a ton of people who don’t have a reliable connection to the internet. That’s an equity problem.

Yeah, that’s huge. What do you think are some of the biggest obstacles that you face as a tech writer?

Well, lately there seems to be this endless spree of wildfires spreading through some of the world’s most powerful companies, [so] they’re really clamping down on what they do or don’t share with journalists. As these companies, like Facebook, Amazon, and Google (or Alphabet, rather, Google’s parent company), get more powerful, they’re also exerting more control over what information gets out about them. So as we continue to chronicle their rise, it’s becoming more difficult to cover them. Yet we continue, because they often repeat their problems over and over and over again.

I think YouTube is a good example of that: After every serious crisis or breaking-news moment, it seems to surface some of the worst information available on its site for people who are looking for news. So we continue to report on how badly it curates the information people are looking for. Still, they’re closing the lid tighter as we continue to push. It’s not easy to report on these large companies.

Do you foresee some of these bigger companies breaking up in any way? They have become monopolies, or are getting closer to that now.

Yeah. It could definitely be argued that Alphabet, which, again, is Google’s parent company, is bigger than the Bell System, Ma Bell, was before it was broken up. It’s a major question right now, coming up on both sides of the aisle, whether these tech firms have gotten too powerful and have too much control over what information people receive, over the economy, over where people shop. They really funnel so much of our basic activity, whether it’s in the marketplace or in our communication.

There are questions about not necessarily breaking them up, because that would be difficult legally, but at least regulating them more, at least putting a tighter rein on how they peddle user data. After all, Facebook and Google are essentially advertising companies; we’re talking about two of the biggest ad companies in the world.

Right, yeah, that are personally directed toward us.

Yeah, that are personally targeted toward you. So, depending on how you’re feeling or what you’re curious about, they know, because of what you searched and where your eyes landed, or any of these things they may be watching. I mean, there are biometric technologies that can see where your eye is on the page of your phone. This is incredibly pervasive data tracking, used to tailor an advertising environment to us in really specific and really manipulative ways. There are definitely regulators at the FTC, and members of Congress, looking at that.

As a tech writer, do you feel particularly optimistic about the future of technology?

Well, the future of technology is a big thing. I don’t right now. Right now, I see a lot of the prejudices and problems that are happening offline being exacerbated and perpetuated online. We see this a lot in A.I., for example. ProPublica did a story about software that’s used to suggest sentencing in court cases. It uses a dataset to inform its decisions, and it would disproportionately recommend harsher sentences for people of color, probably because the datasets it used drew on a history of racism in the criminal justice system in the United States. So as our technologies develop and build on a racist history, that history often deeply permeates the future.

It’s a difficult question. I can talk about what I think people should be concerned about. But there are also ways technology brings us together. I live very far away from my father, and we’re very, very close. I can talk to him through FaceTime, through video on my phone. It’s incredible. I don’t know what kind of relationship we would have, or I know it would be very, very difficult to have the same kind of relationship we do now, if it weren’t for this seamless communication at our fingertips. It’s changing our lives, bringing us closer to other people, and also injecting some negative consequences, too.