Future Tense
The Citizen's Guide to the Future

Nov. 16 2016 2:10 PM

Future Tense Newsletter: A Moment of Reckoning for Facebook

Greetings, Future Tensers,

Are you still scratching your head wondering how Donald Trump was elected? Lots of people are blaming Facebook—particularly fake news on the site. This week Will Oremus explored allegations that Facebook tolerated fake news in response to accusations of liberal bias. But the real problem, Oremus wrote, is CEO Mark Zuckerberg’s denial of Facebook’s role in spreading misinformation. Ultimately, we’ll probably never know for sure whether Facebook is to blame for Trump’s election but, as Jefferson Pooley explains, Facebook might be to blame for that, too.

In another Facebook fail, Mark Joseph Stern writes that he posted on the site a death threat he had received, to show people what he and others are experiencing—only for Facebook to take it down.

In better social media news, Twitter has released a timely new feature to fight harassment. It’s not perfect, says Kate Klonick, but it’s progress. If only there were a tool to prevent sexual harassment on virtual reality platforms. (Or perhaps we could try some human decency?)

If you would like to get your mind off the election, I suggest you catch up on our November Futurography course: “Who Controls the Internet?” This past week, Hao Wu wrote about the live-streaming revolution in China, and Charles Kenney explained that the main concern of most of the world’s internet users is not, in fact, net neutrality.

The stories we read this week while getting schooled on social media include:

  • Standing Rock: Are you wondering what the AT&T–Time Warner merger has to do with the protests at Standing Rock? You should—and New America’s Greta Byrum has your answer.
  • The philosopher: Gottfried Wilhelm Leibniz’s way of thinking paved the way for the information age that blossomed 250 years after his death.
  • Civilization VI: Jacob Brogan explores how the blockbuster video game franchise got the science right in its latest installment.

Events:

  • Join David Biello, author of The Unnatural World: The Race to Remake Civilization in Earth’s Newest Age, for a happy hour event in Washington, D.C., tonight—yes, Wednesday night—to discuss his book about the men and women around the world working to create a better future in the face of climate change. RSVP here. (And you can read an excerpt from his book here.)
  • On Wednesday, Nov. 30, Future Tense is convening experts in Washington, D.C., to consider how new technologies should be deployed to prevent crime, protect our rights, and improve the relationship between law enforcement and the communities they are meant to protect and serve. RSVP to join the conversation in person or online here.
  • RESCHEDULED: Will the internet always be American? On Tuesday, Jan. 24, Future Tense will host a live event in Washington, D.C., to explore the internet’s nationality—the extent to which it’s an expression of American culture, and how that may be changing. You can RSVP to attend in person or watch online here.

Farming the ocean,
Emily Fritcke
for Future Tense

Nov. 15 2016 4:28 PM

Twitter Releases New Features to Fight Harassment—Just in Time

On Tuesday, Twitter announced a new feature in its ongoing fight against harassment and hate speech: an expansion of users’ ability to “mute.” Mute is not an entirely new tool—you’ve long been able to mute accounts you don’t want to see—but this expansion allows users to mute key words, phrases, or conversations. It’s a more fine-grained implement, and one that many people have urged Twitter to adopt over the course of the year to stem harassment.

In a conversation just last month, Renee Bracey Sherman, an abortion activist and frequent victim of abuse on Twitter, and Anita Sarkeesian, founder of Feminist Frequency, a nonprofit organization that looks at pop culture from a feminist perspective, both listed expanding the mute feature as an easy step Twitter could take to stem the tide of harassment—and they noted that other platforms already were using that approach.

“On Facebook, I have in my settings a list of words that are banned from my feed,” Bracey Sherman told me. “So I can block ‘baby killer,’ or ‘you’re a murderer,’ and the post automatically won’t show up.” In the absence of similar action from Twitter, Bracey Sherman was using outside apps like Block Together to manage her notifications and comments on Twitter.

Sarkeesian and others have also emphasized that better training for moderators could help Twitter prioritize abuse and harassment on the platform—something specifically mentioned in Twitter’s press announcement: “We’ve retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct, and implemented an ongoing refresher program,” the company stated in its blog post.

That Twitter users who frequently faced abuse had resorted to third-party applications or other social media sites was not a good sign for the platform, which has historically come under fire for being at first totally resistant to policing hate speech, and then too slow and ineffective in putting systems in place to deal with it.

While these new features make Twitter a lot better at dealing with harassment, they’re not perfect yet. Many of the signature elements of Twitter—anonymity and rapid-fire comment threads—are the very things that make it so prone to abuse and harassment. They also contribute to the rise of mobs of trolls who abuse users en masse. Sarkeesian, who consults with Twitter as part of the company’s new Trust and Safety Council, described how one harasser might tweet at her in a way that was completely within the bounds of the terms of service but trigger an entire army of harassers who flood her feed. This could be resolved, she suggested, by not allowing a blocked person to link to their blocker in a tweet—a simple fix that still appears to be unaddressed in the most recent Twitter announcement.

Still, this is progress—and just in the nick of time. The election of Donald Trump as U.S. president has led to a rise in hate speech and hate crimes both online and off. And while Trump has stated he has no control over the actions of his followers, Twitter’s new policies finally seem to give users better control over the content they see.

Nov. 14 2016 2:36 PM

Did Facebook Really Tolerate Fake News to Appease Conservatives?

Facebook has spent the past week denying that fake pro-Trump news on its platform played a role in the outcome of the U.S. election. On Monday, Gizmodo published a report that, if true, would severely puncture Facebook’s credibility. The tech site reports:

According to two sources with direct knowledge of the company’s decision-making, Facebook executives conducted a wide-ranging review of products and policies earlier this year, with the goal of eliminating any appearance of political bias. One source said high-ranking officials were briefed on a planned News Feed update that would have identified fake or hoax news stories, but disproportionately impacted right-wing news sites by downgrading or removing that content from people’s feeds. According to the source, the update was shelved and never released to the public. It’s unclear if the update had other deficiencies that caused it to be scrubbed.

Facebook was quick to dispute the report. A spokesperson emailed this statement:

The article’s allegation is not true. We did not build and withhold any News Feed changes based on their potential impact on any one political party. We always work to make News Feed more meaningful and informative, and that includes examining the quality and accuracy of items shared, such as clickbait, spam, and hoaxes. Mark himself said, “I want to do everything I can to make sure our teams uphold the integrity of our products.” This includes continuously review updates (sic) to make sure we are not exhibiting unconscious bias.

So who’s telling the truth here? Without knowing for sure, I can offer a few insights that might be helpful in assessing the competing claims.

First of all, any report that relies so heavily on a single anonymous source should be regarded warily. Yes, there are two anonymous sources for the first major claim in the Gizmodo post, which is that Facebook undertook a review aimed at eliminating the appearance of bias. But that part is not particularly controversial, even if it’s true. Facebook itself said earlier this year it would begin to train employees on unconscious political bias, among other unconscious biases. That’s just one of several moves it made in response to charges of liberal bias in its trending news section.

It’s the second, single-source claim—that Facebook buried a fake-news fix to avoid upsetting conservatives—that’s both incendiary and a little shaky. As I’ve written before, Facebook is constantly considering, testing, and evaluating potential changes to the news feed, for all sorts of reasons. It announced several such changes earlier this year, including one that employed machine-learning software to combat clickbait. It’s certainly possible that the company considered another tweak that would have tackled fake news separately, or through a different mechanism.

In fact, a company spokesperson told me that the news feed team had built two different options for a ranking update to address clickbait this past summer. The first relied on user reporting and behavior to identify stories as clickbait and limit their distribution, much as the company already does with hoaxes. The second approach was the machine-learning classifier. The spokesperson told me the machine-learning system performed better, so the company ended up shipping that one.

The spokesperson did not elaborate on the exact criteria by which it performed better. Did the user-reporting system fail to reduce clickbait and other misleading content? Or was the problem that it resulted in overreporting, or incentivized users to report legitimate stories that happened not to align with their views?

It is true, generally speaking, that the company’s decisions about news feed changes are guided by large amounts of both behavioral and survey data on how they impact user engagement and satisfaction. It has shown itself willing over the years to endure major PR backlashes in its quest to optimize for those goals. It would thus be out of character for Facebook to scrap a proposal that performs well on its user metrics out of concern for its political ramifications. If even Gizmodo’s anonymous source couldn’t say whether politics played a role in the decision to go easy on fake news, the implication should be treated with due skepticism.

Still, that doesn’t mean it’s impossible that Facebook considered the political fallout of various approaches to fake news. And it becomes a little more plausible when you consider the degree to which Facebook appears to have freaked out over charges of liberal bias in its trending news section earlier this year. (Symmetry alert: Those charges were themselves lodged most vocally by a single anonymous former contractor in a May 9 Gizmodo post by the same author, Michael F. Nunez.) As that tempest unfolded, Facebook backpedaled furiously, issued a rare apology, held a summit with conservative leaders, and ultimately reworked the entire trending news feature. It laid off the whole team of journalists who were editing the section, dispensed with headlines and context, and further automated the process of story selection and source verification. It did this, as I reported, even though it was clear to those involved that this would result in an inferior news product, including fake news stories and the inclusion of topics that weren’t news at all.

Facebook’s trending section was always a sideshow. The news feed is its core product, and its algorithm is the company’s most precious asset. The company’s willingness to defile its own trending news section to appease conservatives doesn’t necessarily mean it would do the same to the news feed. That said, it does lend the latest charges of political cowardice a little more fuel than they might have had otherwise.

So here, with the Gizmodo piece, we have an epistemologically fascinating case in which Facebook is claiming that a news story about its efforts to crack down on false news stories is, in itself, a false news story. If nothing else, it provides a vivid example to support CEO Mark Zuckerberg’s point that appointing Facebook the arbiter of truth in journalism might come with some pitfalls of its own.

Nov. 14 2016 10:51 AM

Why Wouldn’t Facebook Let Me Post Death Threats I Receive From Trump Supporters?

On Sunday morning, I awoke to an email from a Donald Trump supporter that contained a threat to my life. I have received such threats before, and will surely continue to get them. But this one was especially graphic and specific, and has required me to consult with law enforcement—a nightmare scenario that was unimaginable before Trump and his ghastly army of piteous tormenters came on the scene. To illustrate the gruesome hate that now regularly pours into my inbox, I posted the email on Facebook and made the image public. Hundreds of people shared it as an example of the bigotry that Trump has unleashed upon people like me.

Then, a few hours later, Facebook removed the image, informing me that it violated “Community Standards.” It also removed altogether a long post that a close friend had written about the email and the threats to my life, which included an image of the threat. I promptly reposted the email along with Facebook’s warning. Naturally, Facebook removed that post as well. It then required me to “Review the Community Standards” before continuing to use the platform.

I reached out to Facebook’s press office on Sunday night but have received no reply. Slate’s social media director is in contact with Facebook to restore the images. But that is really no solution at all, because it is not only journalists with access to Facebook HQ who are receiving these threats. Myriad others are being subjected to vicious harassment in the wake of Trump’s triumph. His cruelest supporters feel empowered and emboldened; they are eager to effectuate their vision of America, which has no room for minorities. They will attempt to accomplish this by terrifying minorities into silent submission through a barrage of threats.

Apparently, if the victims of this intimidation attempt to illustrate their torment by posting proof of it on Facebook, the platform will silence them. Or, more specifically, the platform will allow Trump supporters to silence them: Facebook posts are typically removed after users flag them for violating community standards. As my experience proves, Trump supporters can easily flag images of threats, alleging that they violate these standards, and Facebook will censor them. Through this process, Facebook becomes complicit in the silencing of minority voices that are attempting to warn against the rising tide of violent bigotry.

Nov. 11 2016 1:02 PM

Why We Can’t Know Whether Facebook Is to Blame for Trump’s Election

Who’s responsible for Donald Trump’s victory? In a thoughtful election postmortem, Nieman Journalism Lab’s Joshua Benton proposed an answer. “There’s plenty of blame to go around,” he wrote, “but the list of actors has to start with Facebook.” Benton’s complaint is that the site’s indifference to truth, together with its algorithmic sorting, has created self-contained cesspools of mendacity. He pointed to the feed of his Louisiana hometown, where stories like “FBI Agent Who Was Suspected of Leaking Hillary’s Corruption is Dead” were posted by the mayor and showered with likes. Fake news, Benton wrote, has “weaponized” Facebook’s filter bubbles.

Benton and many others invoke the irony that a California company staffed by doe-eyed liberals and officially committed to giving “people the power to share and make the world more open and connected” helped to elect a proto-fascist. There was some pushback to this thesis, though, including a smart if strawman-toppling piece from political media scholar Daniel Kreiss and a rant from Recode.

Missing from the debate is any real evidence one way or another—and we probably won’t get it any time soon. That’s Facebook’s fault. For years now the social-media giant has selectively published self-exculpatory research papers authored by its own employees while refusing to give independent researchers the data to perform their own analyses. For instance, in June 2015, three Facebookers published an academic article in the venerable journal Science on the idea of the filter bubble. What they found confirmed the obvious: People tend to selectively consume media that affirms their existing beliefs—a staple of media research for the last 75 years. They also reported that the News Feed algorithm itself winnows out some of the ideological diversity before it ever reaches Facebook users—the filter bubble in practice, in other words. On the odd grounds that the former factor (users’ selective choices) has more impact than the latter (Facebook’s algorithm), they exonerated their employer: “our work suggests that the power to expose oneself to perspectives from the other side in social media lies first and foremost with individuals.” Basically: It’s not Facebook’s fault. They concluded that “individuals are exposed to more cross-cutting discourse in social media than they would be under the digital reality envisioned by some.”

A number of scholars called out the paper’s many flaws. The self-serving conclusion, especially given its dubious logic, led one critic to compare the study to tobacco-industry misdirection. The most troubling angle, though, went mostly unmentioned: Facebook restricts data access—meaningful data access—to its own researchers. As David Auerbach noted on Slate at the time, “no one outside Facebook can do this research.” Flawed papers get published all the time, and the system is designed to encourage criticism and replication. Without access to the data, however, follow-on studies just weren’t possible (or else required workaround proxies like web-browser history, third-party Facebook apps, or the company’s notoriously restrictive API). So Facebook gets to launder its case through scientific journals without real scrutiny from, or further research by, anyone who isn’t employed there.

Remember the furor over Facebook’s 2014 emotional contagion study, in which the emotional content of 700,000 News Feeds was tweaked? The company came under major fire for the study’s consent-free “manipulation” of its users. As with the 2015 filter-bubble study, independent researchers exposed the study’s many other weaknesses. But the paper’s biggest flaw isn’t the shoddy ethics or its poor design. It’s the fact that the study absolves Facebook from another major line of criticism—that the site makes us depressed since we compare our lives to the highlight-reel posts of others. No, upbeat posts on Facebook don’t make us sad, the researchers conclude; in fact, they make us happier. This finding, the authors explain, “stands in contrast to theories that suggest viewing positive posts by friends on Facebook may somehow affect us negatively, for example, via social comparison.” You should be thanking Facebook for your good mood.

And so it goes. Late last month, the company’s in-house researchers (with university-based co-authors) published yet another Facebook-affirming study, this one using proprietary data to show that Facebook users live longer than social-network abstainers.

Perhaps the critics blaming Facebook for Trump’s shocking victory are wrong. CEO Mark Zuckerberg dismissed the claim as a “pretty crazy idea.” The problem is that we won’t ever know, unless Facebook grants impartial researchers access to study the question, with the freedom to publish their results regardless of whether the findings reflect well on Facebook. We certainly can’t rely on the company’s own researchers—who have a paycheck at stake and a history of published apologia—to settle the issue. If the fake-news-and-algorithms stew really did sway the electorate, it’s doubtful the problem will solve itself by 2020.

Maybe Facebook really does make us happier and healthier. Maybe there’s no big problem with algorithmic echo chambers, and maybe Trump’s rise has nothing to do with the site. That’s the takeaway from the company’s sponsored research. No one else has comparable access to the data, so it’s hard to know. You might even say the system is rigged.

Nov. 11 2016 11:33 AM

Facebook Bans Targeting Based on Race and Ethnicity for Housing and Employment Ads

Facebook announced Friday that it will no longer allow advertisers to exclude users by race or ethnicity when targeting certain kinds of ads.

“We are going to turn off, actually prohibit, the use of ethnic affinity marketing for ads that we identify as offering housing, employment and credit,” Erin Egan, Facebook’s vice president of U.S. public policy, told USA Today.

According to a blog post by Egan, Facebook will also provide educational materials to advertisers about their legal obligations and will require them to affirm that they will not purchase discriminatory ads on Facebook.


Nov. 10 2016 10:31 PM

Mark Zuckerberg Says Fake News on Facebook Had “No Impact” on the Election

Facebook CEO Mark Zuckerberg on Thursday defended the social network's role in the U.S. presidential election. False news stories that were shared hundreds of thousands of times on the network, including claims that the Pope had endorsed Donald Trump and that Hillary Clinton would be arrested on charges related to her private email server, "surely had no impact" on the election, he said, speaking at the Techonomy conference.

"Voters make decisions based on their lived experience," Zuckerberg went on. The notion that fake news stories on Facebook "influenced the election in any way," he added, "is a pretty crazy idea."

In an extended on-stage interview with David Kirkpatrick, author of The Facebook Effect, Zuckerberg noted that fabricated stories made up a small fraction of all the content shared on Facebook. And he suggested that the criticism Facebook has received for fueling such falsehoods was rooted in condescension on the part of people who failed to understand Donald Trump's appeal. "I think there is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is because they saw fake news," Zuckerberg said. "If you believe that, then I don't think you internalized the message that Trump voters are trying to send in this election."

Here Kirkpatrick broke in to ask Zuckerberg what that message was. Zuckerberg demurred, suggesting he'd return to that question after he'd finished his thought. He did not.

Zuckerberg suggested that the clincher to his argument was that, to the extent fake news was shared, it must have been shared by Clinton supporters as well as those who backed Trump. "Why would you think there would be fake news on one side and not the other?"

In fact, fake news was shared by both sides, but a BuzzFeed analysis of 1,137 posts by six significant "hyperpartisan" news sources—three conservative and three liberal—found that mostly or partly false stories on the right outnumbered those on the left by a ratio of two to one. BuzzFeed separately reported on a cottage fake-news industry that had sprung up in Macedonia largely around pro-Trump and anti-Clinton content. People who produced the bogus stories said they had tried pro-Clinton content but found that it was less likely to go viral.

Zuckerberg also took a question about whether Facebook might be contributing to the country's political division by insulating its users in "filter bubbles"—communities of like-minded people who reinforce one another's biases rather than challenging them. There, too, Zuckerberg found the criticism misplaced. "All the research we have suggests that this isn't really a problem," he said. "For whatever reason, we've had a really hard time getting that out." He cited a Facebook-funded 2015 study that concluded that while Facebook's news feed does tend to show people information that supports their political views, their own choices about what to read play a greater role. That study was itself criticized by some for soft-pedaling its findings. Social media researcher and writer Zeynep Tufekci rebutted it in some depth here.

Zuckerberg noted that Facebook takes fake news and hoaxes seriously and gives users tools to report them. Despite his view that they played no role in the election, he said Facebook would continue to work to address the problem. He also said Facebook will continue to explore ways to expose users to a diversity of views in their news feeds.

Nov. 9 2016 4:26 PM

My Bitmoji Looks as Stricken by Trump’s Victory as I Feel

I haven’t used Bitmoji since the election, in part because the cute little cartoon seems incongruous with most of the things I’m feeling and writing, and in part because I want, for as long as possible, to shelter my emotionally effusive avatar from the fact that Trump is president. She’s excitable—fuchsia hearts leap out of her eyes when she’s happy. She would not take the news of apocalypse well. I did accidentally open up my Bitmoji keyboard, though, last night, while pawing blearily at my phone in some purple predawn hour. (Bitmoji are the customizable drawings you can download to send short messages over text.) My cartoon representative appeared in her capris and gray scarf, losing her shit just like I was. “I can’t even,” she moaned. “Nooooooo,” she cried. “What happened?” “What the ?!” “I cannot.” “Woe is me.” “No way.” “Brutal.”


I took a closer look at my phone. It wasn’t displaying the menu of negative messages that live under Bitmoji’s frowny-face symbol. Instead, I’d pulled up the default tab, the one in which the app recommends pictograms based on what’s happening in the world.

(On a normal morning, for instance, my avatar likes to pose with a steaming cup of coffee. When the new X-Men movie came out, she dressed up as Magneto. Those were innocent times.)

But in the early hours of Nov. 9, mini-me was despairing over the state of the country. A small number of happy or neutral messages were interspersed with the gloom-and-doom: “Mission Accomplished,” “Happy Day,” “I voted.” But they were few and far between—and I could see in my avatar’s eyes that she didn’t mean it.

An informal office survey this afternoon reveals that all of our Bitmoji continue to freak out, not just mine. Apparently the little guys skew Democratic. So here we are: commiserating with digital projections of ourselves about a sewage monster who wants to blast American society at least 100 years back in time. If technology represents the future, the future rejects Trump, and implores us to come along. I just wish the Bitmoji coalition had turned out at the polls.

Nov. 9 2016 3:30 PM

Future Tense Newsletter: Can We Trust Trump With the NSA?

Hello, Future Tensers,

We at Future Tense try to be the citizen’s guide to the future. But post–election night, we’re a little lost about what will happen next, too.

Last week, former State Department official and whistleblower John Napier Tye wrote about the risks of giving a President Trump access to the National Security Agency’s expansive surveillance capabilities. Will tech companies, politicians, journalists, and ordinary Americans now start to think differently about data security?

Although Trump’s win was by far the biggest election upset, we also noticed the quieter shake-up in a Silicon Valley district, where “tech candidate” Ro Khanna unseated eight-term Rep. Mike Honda. As Will Oremus wrote recently, though the 40-year-old Khanna is not a technologist, he “managed to style himself as a Silicon Valley candidate thanks to his youthful zeal, his close ties with the tech industry, and a platform that centers on ‘21st-century’ education and job creation.” Might he be a bellwether for future Silicon Valley politics?

Though the election may be on the brain, this month’s Futurography course also contemplates American values and control—specifically the power the United States holds over the internet. In case you missed it, here’s our handy introduction and cheat sheet, plus a (largely) jargon-free guide to ICANN, the oft-villainized internet governance organization that makes it possible for you to use the web. (It comes complete with Space Jam and Geocities references for some much-needed levity.)

Here are some other stories to focus your eyes on instead of staring off in shock:

  • Clickbait profiteers: Will Oremus takes us into the bizarre and booming Balkan-based cottage industry that made bank on spreading pro-Trump propaganda—and how Facebook may have fueled their success.
  • The Google: It’s hard to miss the “in the news box” above Google’s organic web results—but how does the search engine actually pick which stories and outlets to display? Daniel Trielli and Nicholas Diakopoulos of the University of Maryland’s Computational Journalism Lab explain.
  • Augmented reality: Pokémon Go got tens of thousands of people to go outdoors and chase imaginary monsters. Could we use similar networked games to encourage altruistic behavior in the aftermath of disasters like hurricanes?

Events:

  • As civilization faces the crises of the world we transformed—climate change, ocean acidification, mass extinction, resource shortages—does humankind stand a chance? Join Future Tense on Nov. 16 at 6 p.m. in Washington, D.C., as we talk with David Biello, author of The Unnatural World: The Race to Remake Civilization in Earth’s Newest Age, about the people and strategies that should give us hope that we can cultivate a better future. RSVP to attend in person or stream online here.
  • RESCHEDULED: Will the internet always be American? On Tuesday, Jan. 24, Future Tense will host a live event in Washington, D.C., to explore who controls the internet. You can RSVP to attend in person or watch online here.

Your fellow citizen,
Kirsten Berg
for Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

Nov. 8 2016 8:05 PM

North Carolina Extends Voting Hours After Technical Difficulties

The North Carolina State Board of Elections extended voting in Durham County after laptops used to confirm voter registrations malfunctioned earlier in the day. Voting was extended at eight precincts by amounts ranging from 20 to 60 minutes.
