Technology

The Problem With Silicon Valley’s Playthings

Even YouTube’s service for kids is being abused. Can anything control the massive platforms that now shape our lives?


It’s the stuff of 21st-century nightmares: A child is happily watching videos of his favorite cartoon characters, when suddenly he screams. The characters he’s grown to know and like are killing each other, or killing themselves, or torturing each other in bizarre ways.

That was the lead anecdote in a New York Times story this past weekend about disturbing videos that are slipping past the parental controls on YouTube Kids, YouTube’s wildly popular video app for young children. And it wasn’t an isolated incident. As a long, thoughtful Medium post by the artist and writer James Bridle establishes, there are countless videos on the platform that no parents would want their children watching, ranging from mildly unsettling to profoundly creepy.

It’s not yet clear exactly who’s making these videos or why, though there’s a good chance we’ll learn at least some of that in the coming weeks. Some appear handcrafted to inflict psychological trauma, but Bridle notes that many others have the whiff of automation about them, senselessly combining popular keywords with pirated characters in endless variations on a theme. The phenomenon of creepy kids’ videos is troubling enough in itself, but Bridle touches on the deeper problem it represents. From his post:

We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one with minimal effort to rewrite everything I’ve said, with very little effort, to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.

In other words, the problem is not just YouTube Kids. The problem is that we have entrusted big tech companies with everything from our email to our social media feeds to our children’s entertainment. Those big tech companies, in turn, have entrusted ranking, filtering, monitoring, and other key decision-making functions to software programs built on machine-learning algorithms. And those algorithms, we’re gradually learning, are not always worthy of our trust.

Automation brings huge advantages of scale, speed, and price: We now have virtually endless content and information at our fingertips, all organized for us according to (some computer program’s notion of) our personal needs, interests, and tastes. Google, Facebook, Spotify, Amazon, Netflix: All have taken tasks once done by humans (librarians, scrapbookers, DJs, retail clerks, video-store managers—and, let’s not forget, advertising salespeople) and found ways to do them automatically, instantly, and at close to zero marginal cost. As a result, they’re taking over the world, and making enormous profits in the process.

We know these companies’ algorithms aren’t perfect. But they all have talented engineers working constantly to improve the software, and most of what they serve us is good enough to keep us coming back—perhaps more compulsively than we ever thought possible.

Yet each week now seems to bring fresh examples of how these programs can fail us, sometimes mundanely, other times dramatically. Facebook proved fertile ground for fake news and helped Russia meddle in the U.S. election by ranking divisive political content high in users’ feeds. Twitter’s laissez-faire approach to abuse and harassment allowed deceptive bots to flourish and sow discord. Even Spotify is being gamed by opportunists looking to fool users into playing ersatz songs.

What’s happening now on YouTube Kids is just a more explicitly toxic version of that same issue. Whenever you find an algorithm making high-stakes decisions with minimal human supervision—that is, decisions that determine whose content is widely viewed, and therefore who makes money—you will find cottage industries of entrepreneurs devising ever subtler ways to game it.

Whenever an example of this comes to light, the companies are quick to point out that it’s relatively rare. At last week’s congressional hearings on Russian meddling and social media, Facebook’s general counsel returned time and again to a talking point about how Russian political posts amounted to a tiny fraction of all posts on the platform. And in response to the New York Times piece about YouTube Kids, the company’s spokesman called the inappropriate videos “the extreme needle in the haystack.”

The analogy of the needle in the haystack is at once misleading and weirdly apt. The misleading part is this: A needle in the haystack is something that’s almost impossible to find—yet YouTube’s software placed this content in front of kids who weren’t even looking for it. What’s appropriate about the analogy is that a needle is a really bad thing to have in a haystack. Serve enough hay to enough cattle, and some of them are bound to eat the needles, with potentially unfortunate effects. If you’re in the hay-serving business, then, it would behoove you to keep your haystacks meticulously needle free. That’s exactly what YouTube isn’t doing.

To carry this analogy a little farther than it was probably ever meant to go: If you were a farmer, and your hay provider was serving you hay laced with needles, would you accept the excuse that the hay was 99 percent needle free, or even 99.9 percent needle free? How about if this hay provider assured you that the needles weren’t placed there maliciously, but were rather an unfortunate side effect of an otherwise highly efficient mechanized hay-sorting process? Well, perhaps you would accept that, if you valued cheap hay on a grand scale more than you valued the well-being of your cattle. But you’d probably object more strenuously if you found a needle in your child’s cereal.

And that’s where we return to Bridle’s point. The YouTube Kids videos represent a case where kids found needles in their cereal, and of course it’s a scandal. But the greater scandal is that we’re turning over more and more aspects of our lives to the same kinds of algorithms that failed to sift the needles from the cornflakes. The tech companies want us to treat these mishaps as rare, isolated incidents, to be diagnosed and treated on an ad hoc basis. They’ll hire 3,000 poorly paid contractors to scan live videos for on-screen killings, or 10,000 poorly paid contractors to scan ads for foreign election tampering—while the rest of the machinery churns on apace.

At some point, we have to step back and look at the whole system and ask whether we’re willing to accept shoddy quality control as the price of convenience. Because the evidence is mounting that systems on the scale of those that our largest tech companies have created are, as my colleague April Glaser recently argued, too large to be effectively monitored. Their whole businesses are built on the premise that algorithms can make decisions on a scale, and at a speed, that humans could never match. Now they’re pledging to fix those algorithms’ flaws with a few thousand contractors here or there. The numbers don’t add up.

Google, Facebook, and their competitors have built enormously successful companies, in part by earning our trust. Not our full trust, perhaps, but enough that the vast majority of us have opted to let these companies surveil practically our every online move and keep records of it. Now they’re in danger of losing that trust. There are signs they’re beginning to recognize the seriousness of that problem. So far, however, there are no signs that it’s solvable.

Update, 7:25 p.m.: A YouTube spokesperson provided the following statement: 

The YouTube Kids team is made up of parents who care deeply about this, so it’s extremely important for us to get this right, and we act quickly when videos are brought to our attention. We use a combination of machine learning, algorithms and community flagging to determine content in the app as well as which content runs ads. We agree this content is unacceptable and are committed to making the app better every day.

The company also clarified that some of the examples referred to in Bridle’s Medium post came from YouTube proper, rather than the YouTube Kids app. The examples in the New York Times story were all from the app.