Future Tense

Live Video Platforms Should Assume They Will Eventually Host Footage of a Police Shooting

A Facebook user watches the Facebook Live stream Philando Castile’s girlfriend took of his dying moments.

STF/AFP/Getty Images

On Wednesday, 32-year-old Philando Castile, a black man from Falcon Heights, Minnesota, was shot and killed by a police officer during a traffic stop. From the front passenger seat, his girlfriend began filming a Facebook Live video showing Castile bleeding in the driver’s seat as a police officer pointed a gun at him and talked. The video ran for 10 minutes.

After the stream ended and the video was posted to Facebook for playback, it went down for at least an hour in the immediate aftermath of the incident. Facebook told the Telegraph, “We’re very sorry that the video was inaccessible. … It was down to a technical glitch and restored as soon as we were able to investigate.” The timing of this technical difficulty seems oddly coincidental given the sensitive nature of the video, but a glitch is conceivable, especially if the video was receiving heavy traffic. The video is now labeled, “Warning—Graphic Video. Videos that contain graphic content can shock, offend and upset. Are you sure you want to see this?”

Facebook has used this type of warning screen before, as with footage of the Walter Scott shooting last April. And certainly other social networks have struggled to make the right calls about policing inappropriate content. In 2014, for example, Twitter created controversy when it tried to suppress images of the beheading of journalist James Foley by ISIS. But there’s a whole other dimension when a video streams in real time for anyone to see and is only later covered with a warning. Streaming services like Facebook Live, which launched for celebrities in 2015 and for all users in April, and Periscope, which is owned by Twitter and launched in 2015, show events as they happen. As such, they bring an additional complication to the already fraught question of how social networks should react, if at all, to controversial user-generated content.

A service like Facebook Live draws on Facebook’s base of more than 1.6 billion monthly active users. Though an individual streamer can’t instantly capitalize on that whole audience, footage of societal significance, like the Philando Castile shooting video, can spread quickly. A service like Periscope has a smaller potential base: Bloomberg estimated in June that Twitter has fewer than 140 million daily active users. That’s still millions of people, and the service can certainly surface important footage, but the scale is different.

The services also take different approaches to graphic video. Facebook writes in its Community Standards:

Facebook has long been a place where people share their experiences and raise awareness about important issues. Sometimes, those experiences and issues involve violence and graphic images of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, they are condemning it or raising awareness about it. We remove graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence.

When people share anything on Facebook, we expect that they will share it responsibly, including carefully choosing who will see that content. We also ask that people warn their audience about what they are about to see if it includes graphic violence.

This language focuses mostly on how people should share content and says less about what Facebook will do if it decides a user isn’t meeting these standards. Periscope’s Community Guidelines are similar but place less emphasis on the inevitability of graphic content.

Periscope is intended to be open and safe. To maintain a healthy platform, explicit graphic content is not allowed. Explicit graphic content includes, but is not limited to, depictions of child abuse, animal abuse, or bodily harm. Periscope is not for content that is intended to incite violence, or includes a direct and specific threat of violence to others. Periscope reserves the right to allow sensitive content when it is artistic, educational, scientific or newsworthy.

Given the data they host, content-sharing platforms have a subtle but deep power. As Motherboard wrote Thursday, “Facebook has become the self-appointed gatekeeper for what is acceptable content to show the public, which is an incredibly important and powerful position to be in.” That power may not have been obvious at first, but it has been recognizable for years, and it’s time for companies to own it and make their positions plain.

When someone begins to record and stream an in-progress terrorist attack, he or she doesn’t have time to research which site will most value that type of contribution. Facebook’s Community Standards seem to imply that the company is open to supporting content that promotes transparency, while Periscope’s guidelines are more hesitant. For Facebook, it’s time to walk that walk so its users can get a better sense of what to expect. Companies have the right to promote whatever values they want, but they need to make those values prominent in their brands and consistent in their application. That way, consumers can make informed choices about where to take their data when it counts the most.