YouTube plans to hire hundreds of monitors to look for inappropriate content.

YouTube Plans to Have More Than 10,000 People Dedicated to Monitoring Content. Will It Work?

Future Tense
The Citizen's Guide to the Future
Dec. 5, 2017, 4:57 PM

Can YouTube Solve Its Moderation Problems by Hiring Hundreds of People?

Can YouTube solve the content moderation conundrum?

ERIC PIERMONT/AFP/Getty Images

YouTube announced on Monday that it will expand its content moderation and rule enforcement staff to more than 10,000 people by the end of 2018, a 25 percent increase, according to BuzzFeed. The decision comes after a series of controversies over children’s safety on the platform, affecting both those who watch videos and those who appear in them. Other platforms have also struggled with the delicate task of protecting viewers while preserving free speech, so the big question for YouTube next year is whether this approach will work.

Multiple reports last month revealed how nightmarish videos have been able to sneak past parental filters on YouTube Kids, an app that is supposed to weed out inappropriate content for preschool and school-aged viewers. Some of the offending videos include characters from popular TV cartoons killing and torturing each other. The controversy was flipped on its head last week when YouTube moved to ban a channel called ToyFreaks, which was run by a father who was filming his daughters in distressing situations that some claimed could amount to abuse. Then, on top of it all, users reported that the site’s autofill feature was suggesting pedophile-oriented search terms.

“Our goal is to stay one step ahead of bad actors, making it harder for policy-violating content to surface or remain on YouTube,” CEO Susan Wojcicki wrote in a post announcing the expansion. She indicated that the growing enforcement division will augment the company’s existing efforts to manually monitor and remove videos and to train its machine-learning software to better identify content that violates the platform’s rules. Beyond videos that threaten children’s well-being, the reviewers will also focus on violent and extremist content.

Other social media sites have also wrestled with content moderation as of late, fielding complaints of lackluster enforcement. Facebook, which according to a series of announcements since the summer plans to grow its community standards team to about 8,750 people, has been called out on numerous occasions for letting footage of murder and rape remain on the site for extended periods of time. Twitter, which reportedly employs about 3,200 people, received criticism last week when the platform did not take down an apparent snuff film that President Trump had retweeted.

On the flip side, platforms have also drawn backlash for what some see as heavy-handed censorship. These complaints, though, tend to center on what counts as hate speech or abuse of other users rather than on classifying videos as disturbing or offensive. For example, women have been calling out Facebook over the past week for suspending them over posts like “men are scum” while still allowing racist and sexist slurs to stand in some instances. Twitter earlier this month also came under scrutiny for blocking search results for LGBTQ-related terms such as bisexual and queer.

Given the immense quantity of content on these platforms—300 hours of video are uploaded onto YouTube every minute—it would be a fool’s errand to try to create a moderation system that exclusively relies on humans to make removal decisions. Plus, employees who have to personally view extremely disturbing content in order to properly screen it can experience lasting psychological injuries; a group of moderators sued Microsoft in January alleging that they were suffering from PTSD as a result of watching child abuse and other sadistic acts.
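To see why exclusively human review is infeasible, a rough back-of-envelope calculation helps. The sketch below assumes, purely for illustration, that each moderator watches video at normal speed for eight hours per workday; neither figure comes from YouTube.

```python
# Back-of-envelope estimate: moderators needed to review every upload in full.
# The 300-hours-per-minute figure is cited in the article; the 8-hour review
# day is an illustrative assumption, not a reported number.

UPLOAD_HOURS_PER_MINUTE = 300
MINUTES_PER_DAY = 60 * 24

# Total hours of video uploaded to YouTube each day.
daily_upload_hours = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY  # 432,000

# Assume one moderator can watch 8 hours of video per workday at 1x speed.
HOURS_REVIEWED_PER_MODERATOR_PER_DAY = 8

moderators_needed = daily_upload_hours / HOURS_REVIEWED_PER_MODERATOR_PER_DAY

print(f"Video uploaded per day: {daily_upload_hours:,} hours")
print(f"Moderators needed for full review: {moderators_needed:,.0f}")
```

Even under these generous assumptions, full manual review would require roughly 54,000 full-time reviewers working every day, several times the 10,000-person team YouTube has announced, and that is before accounting for breaks, ambiguous cases, or re-review.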

Yet machine learning is not advanced enough at the moment to completely automate the process, though A.I. researchers at Facebook are working on software that they say will eventually enable computers to do most of the work. A.I. systems cannot yet handle the complexities of moderation, such as distinguishing a joke from a genuine threat, efficiently and without unduly stifling free speech.

Most platforms are now using a combination of human and machine moderators, which is imperfect but also seems to be the only realistic option at the moment. With YouTube’s new initiative, however, we’ll be able to see the effectiveness and limits of such an approach.

Future Tense is a partnership of Slate, New America, and Arizona State University.