The Industry

YouTube Is Realizing It May Be Bad for All of Us

The video platform has only begun to reckon with its misinformation problems. So far, it seems to have no idea how to solve them.

Almost since its founding, Twitter has regarded itself as a platform for news; watershed moments in the company’s history included the Arab Spring and a 2011 East Coast earthquake. Facebook began talking as early as 2013 about wanting to be a sort of “personalized newspaper,” only to realize more recently that separating real news from propaganda was harder than it had anticipated. Both companies have spent much of the past year reckoning with their roles in spreading hoaxes, facilitating election interference, and fueling societal division and extremism.

Google’s YouTube, in contrast, has until recently skirted much of the criticism that rival social media platforms have faced. As a result, it has been relatively slow to grapple with the implications of its role as a source of news—and, inevitably, of misinformation. As recently as October, in a series of congressional hearings, Google downplayed the significance of its video platform in Russian election interference, while Facebook and Twitter fielded the toughest questions.

Now a series of controversies is forcing YouTube to address its responsibilities more directly and candidly than it has in the past. At the South by Southwest Interactive conference this week in Austin, Texas, an appearance by YouTube CEO Susan Wojcicki made it clear that the platform is finally giving serious thought to the quality and veracity of information its users are getting. It also made clear that the company has a lot of catching up to do.

In January, star YouTube vlogger Logan Paul sparked a backlash when he published a video showing the dead body of an apparent suicide victim in Japan. Google responded by dropping Paul from its Google Preferred ad program and suspending his projects for its premium video service, YouTube Red. That came as the company was already dealing with a series of reports revealing disturbing cartoons apparently aimed at kids (it subsequently deleted 150,000 of them) and another genre of videos depicting children in abusive situations. The latter prompted the company to hire thousands more human moderators to review content for potential policy violations.

But it has since become obvious that YouTube’s problems aren’t simply a matter of manpower. They flow from the company’s confusion about the nature of its business and the extent of its responsibilities.

In February, YouTube’s algorithms boosted conspiracy videos attacking the children who survived the Parkland, Florida, school massacre. This month, influential YouTube conspiracy theorist Alex Jones claimed the company was about to delete his channel, a claim YouTube said was not true. Writing in the New York Times this week, the tech scholar Zeynep Tufekci called YouTube “the great radicalizer,” detailing how its recommendation algorithm carries casual viewers down rabbit holes of extremism.

At South by Southwest on Tuesday, Wojcicki came prepared with a new plan to tackle conspiracy videos on YouTube. But the subsequent reaction suggested she hadn’t come quite prepared enough.

Pointing to slides that displayed YouTube videos purveying conspiracies about the moon landing and chemtrails, Wojcicki said the company will begin displaying “information cues” drawn from Wikipedia alongside such clips. The cues come in the form of little blocks of text directly below the video. To determine which videos require such a counterweight, Wojcicki said, YouTube will draw on Wikipedia’s own list of well-known conspiracy theories. She noted that YouTube could expand that list over time.

This was all news to the Wikimedia Foundation, the nonprofit that administers the volunteer-edited Wikipedia. “We were not given advance notice of this announcement,” the foundation said in a statement Wednesday, adding that there is no formal partnership between it and YouTube. While it did not explicitly object to YouTube using its content in this way, the nonprofit stressed that it relies on donors and unpaid editors, and that it “does not control content or make editorial decisions about information that is included on Wikipedia.”

In other words, an $800 billion company—arguably the world’s most important information source—is outsourcing the job of combating misinformation on its platform to a nonprofit built on free labor. And it didn’t bother to tell the foundation beforehand.

Even if YouTube had handled this better, it’s not clear that Wikipedia cards will do much to solve its problems. The approach is loosely analogous to one that Facebook began deploying in December 2016 to fight fake news. But Facebook later ditched its initial approach amid concerns that it might be backfiring (an effect that is itself disputed by some). After a year’s worth of research, testing, and trial and error, there’s been little evidence so far that the social network’s efforts have made much difference in the spread of misinformation. That’s one reason Facebook has simultaneously pursued several other avenues to improve the quality and credibility of news and information on its platform—some more promising than others.

One rather glaring problem with YouTube’s plan is that it seems ill-equipped to handle conspiracy theories that crop up rapidly in the wake of major news events such as the Parkland shooting. Those memes—such as the one smearing Parkland survivor David Hogg as a “crisis actor”—are liable to make the rounds on YouTube well before they’ve been authoritatively debunked on Wikipedia, let alone added to a master list of well-known conspiracies. Even with human moderators on hand, some of these conspiracy theories have climbed to the top of YouTube’s prominent list of trending videos, racking up millions of views before being taken down.

Another issue: Wikipedia is vulnerable to trolls and propagandists, just like any other platform that relies on the public to produce and curate content. It might become more so, now that people know Wikipedia holds the keys to YouTube’s conspiracy-debunking apparatus, such as it is. One prominent Wikipedia editor was quick to warn that YouTube’s misinformation problems run deeper than anything that could be solved by an “irregularly updated encyclopedia.”

On Tuesday, the same day Wojcicki took the stage in Austin to announce her big fix, a YouTube spokesperson was telling Vice News that the company had no idea why an InfoWars conspiracy video had topped its search results on Monday for queries about the package explosions in Austin. The InfoWars video claimed, without evidence, that Antifa was a “prime suspect” in the bombings.

In the same session, according to tweets by BuzzFeed News’ Ryan Mac, Wired editor-in-chief Nick Thompson asked Wojcicki how YouTube decides which information sources are credible. Wojcicki declined to get specific, saying “there are usually complicated algorithms”—the sort of vague excuse that Facebook shopped for years before critics largely stopped buying it. She did suggest, however, that YouTube looks at the “number of journalistic awards” and “amount of traffic” that various publishers have. If that’s true, those are awfully crude, amateurish proxies for credibility—cruder even than Facebook’s widely ridiculed publisher trust survey.

It appears, then, that YouTube is just beginning the journey of reckoning that Facebook was compelled by critics (and its own outraged employees) to undertake almost immediately after the 2016 election. Part of that journey will probably involve learning some humility: Just last month Wojcicki sniped at a media conference in California that Facebook “should get back to baby pictures,” the implication being that it couldn’t handle the grown-up responsibilities of a prominent news platform. One could imagine a similar dig at YouTube, perhaps swapping out baby pictures for cat videos.

The reality is that none of these platforms is going back to its comparatively innocent early days. Social media, for better or worse, has firmly established itself as a primary news source for huge swaths of the population, with younger generations leading that trend. That means YouTube, like Facebook and Twitter, is stuck with the responsibility of figuring out how to do a better job of informing its users. Excuses like the one Google’s Richard Salgado offered to Congress in October—that the company is a “technology platform” and therefore not in the news business—no longer fly.

It’s good to see YouTube finally beginning to acknowledge that it has a problem. The next step is for it to understand that solving that problem won’t be easy and that quick fixes won’t suffice. Facebook’s Mark Zuckerberg indicated in January that he understood fixing his platform would require a sustained, long-term effort. Twitter CEO Jack Dorsey said as much in a tweetstorm earlier this month. To judge from Wojcicki’s recent comments, YouTube isn’t quite there yet.