Summary: On June 5, 2019, YouTube announced it would be stepping up its efforts to remove hateful content, focusing on the apparent increase in white nationalist and pro-Nazi content being created by users. The algorithm change would limit the reach of borderline content and steer viewers toward content less likely to contain hateful views. The company's blog post specifically stated it would be removing videos that "glorified Nazi ideology."
Unfortunately, when the updated algorithm went to work removing this content, it also took down content that educated and informed people about Nazis and their ideology, but quite obviously did not "glorify" them.
Ford Fischer -- a journalist who tracks extremist and hate groups -- noticed his entire channel had been demonetized within "minutes" of the rollout. YouTube responded to Fischer's attempt to have his channel reinstated by stating multiple videos -- including interviews with white nationalists -- violated the updated policy on hateful content.
A similar thing happened to history teacher Scott Allsop, who was banned by YouTube for his uploads of archival footage of propaganda speeches by Nazi leaders, including Adolf Hitler. Allsop uploaded these for their historical value as well as for use in his history classes. The notice placed on his terminated account stated it had been taken down for "multiple or severe violations" of YouTube's hate speech policies.
Another YouTube user noticed his upload of a 1938 documentary about the rise of the Nazi party in Germany had been taken down for similar reasons, even though the documentary was decidedly anti-Nazi in its presentation and had obvious historical value.
Decisions to be made by YouTube:
- Should algorithm tweaks be tested in a sandboxed environment prior to rollout to see how often they're flagging content that doesn't actually violate policies?
- Given that this sort of mis-targeting has happened in the past, does YouTube have a response plan in place to swiftly handle mistaken content removals?
- Should additional staffing be brought on board to handle the expected collateral damage of updated moderation policies?
Questions and policy implications to consider:
- Should there be a waiting period on enforcement that would allow users with flagged content to make their case prior to being hit by enforcement methods like demonetization or bans?
- Should YouTube offer some sort of compensation to users whose channels are adversely affected by mistakes like these?
- Should users whose content hasn't been flagged previously for policy violations be given the benefit of the doubt when flagged by automated moderation efforts?
Resolution: In most cases, content mistakenly targeted by the algorithm change was reinstated within hours of being taken down. In the case of Ford Fischer, reinstatement took longer. Fischer was demonetized again by YouTube in early 2021, apparently over raw footage of the January 6th riot in Washington, DC. Within hours, YouTube had reinstated his account, but not before drawing more negative press over its moderation problems.
Written by The Copia Institute, March 2021