Social media services respond when recordings of a shooting are uploaded by the person committing the crimes (August 2015)

Platforms respond to a shooting video and its rapid proliferation.

Summary:

The ability to instantly upload recordings and stream live video has made content moderation much more difficult. More than 500 hours of content are uploaded to YouTube every minute (as of May 2019), a volume that makes any form of moderation inadequate.

The same goes for Twitter and Facebook. Facebook’s user base exceeds two billion worldwide. Over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.

When the unthinkable happened on August 26, 2015, Facebook and Twitter responded swiftly. But even their swift efforts weren't enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.

[Image: blurred still from the shooting video]

Both platforms removed the videos and deactivated Flanagan's accounts. Twitter's response took only minutes. But the videos had already begun to spread, leaving moderators to track down duplicates before they could be viewed and copied yet again. Many of these copies ended up on YouTube, where moderation efforts to contain the spread still left several reuploads intact. This prompted an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating that it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.

Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding that autoplay be made opt-in, rather than the default setting, to prevent them from inadvertently viewing disturbing content.

Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube — all of which allow live-streaming.

Decisions to be made by social media platforms:

  • What efforts are being put in place to better handle moderation of streaming content?
  • What efforts, AI-driven or otherwise, could be deployed to prevent the streaming of criminal acts, and which ones should we adopt?
  • Once notified of objectionable content, how quickly should we respond?
  • Are there different types of content that require different procedures for responding rapidly?
  • What is the internal process for making moderation decisions on breaking news over streaming?
  • While the benefits of auto-playing content are clear for social media platforms, is making this the default option a responsible decision, not just with respect to potentially objectionable content but also for users who may be on limited mobile data plans?

Questions and policy implications to consider:

  • Given increasing Congressional pressure to moderate content (and similar pressure from other governments around the world), are platforms willing to “over-block” content to demonstrate their compliance with these competing demands? If so, will users seek out other services if their content is mistakenly blocked or deleted?
  • If objectionable content is the source for additional news reporting or is of public interest (like depictions of violence against protesters, etc.), do these concerns override moderation decisions based on terms of service agreements?
  • Does the immediate removal of criminal evidence from public view hamper criminal investigations? 
  • Are all criminal acts of violence considered violations of content guidelines? What if the crime is being committed by government agents or law enforcement officers? What if the video is of a criminal act being performed by someone other than the person filming it? 

Resolution:

All three platforms have made efforts to engage in faster, more accurate moderation of content. Live-streaming presents new challenges for all of them, which are being met with varying degrees of success. These platforms deal with millions of uploads every day, which means objectionable content will still slip through and be seen by hundreds, if not thousands, of users before it can be targeted and taken down.

Content like this is a clear violation of terms of service agreements, making removal — once notified and located — straightforward. But being able to “see” it before dozens of users do remains a challenge.


Written by The Copia Institute, June 2020