Summary: Content moderation at scale often involves significant tradeoffs between diverse interests. It is often difficult for those without experience in the field to recognize these competing interests.
Social media services aren't just beholden to their users. They're also at the mercy of dozens of competing interests at all times.
Users expect one thing. A bunch of governments expect another. Internal policies and guidelines result in another layer of moderation. Then there are the relatively straightforward obligations platforms must fulfill to retain their safe harbors under the DMCA.
So what happens when all of these competing interests collide? Well, according to multiple studies, the most common side effect is over-moderation: the deletion of content that's not in violation of anything, just in case.
For the past half-decade, Stanford Law School's Daphne Keller has been tracking platforms' responses to external stimuli: the pressures applied by outside interests that -- for good or evil -- want social media services to expand their moderation efforts.
And for most of that half-decade, Keller has seen "good faith" efforts expand past the immediate demands to encompass preemptive removal of content that has yet to offend any one of the hundreds of stakeholders applying legal pressure to US-based tech companies.
The research shows large companies are just as preemptively compliant as smaller ones, even though smaller companies, which lack the resources to fight, have much more at risk.
The easiest, cheapest, and most risk-avoidant path for any technical intermediary is simply to process a removal request and not question its validity. A company that takes an “if in doubt, take it down” approach to requests may simply be a rational economic actor. Small companies without the budget to hire lawyers, or those operating in legal systems with unclear protections, may be particularly likely to take this route.
The multiple studies cited reach the same conclusion, whether the platform serves millions of users or a small, niche audience: when in doubt, take it out.
Decisions to be made by platforms:
- Should a premium be placed on protecting user content in the face of vague takedown demands?
- Does protecting users from questionable takedown demands result in anything more quantifiable than "goodwill"?
- Are efforts being made to fight back against mistargeted or unlawful content removal requests? Is the expense/liability exposure too costly to justify defending users against unlawful demands from outside entities?
Questions and policy implications to consider:
- Do platforms ultimately serve their users' interests or the more powerful interests applying pressure from the outside?
- Is staying alive to "fight another day" ultimately of more use to platform users than taking a stand that might result in being permanently shut down?
- Is it wise to attempt to satisfy all stakeholders in content moderation issues? Should platforms choose a side (users v. outside complainants) or is it wiser to "play the middle" as much as possible?
- Are there tangible advantages to deciding users are more important than outside entities who may have the power to dismantle services specializing in third-party content?
Resolution: The war between users and outside interests continues. As pressure mounts to moderate more and more content, users are often the first to feel the squeeze. The larger the platform, the higher the demands. But larger platforms are more capable of absorbing the costs of compliance. Smaller ecosystems need more protection but are often incapable of obtaining the funds needed to fight legal battles on behalf of their users.
True balance is impossible to achieve, as this research shows. Unfortunately, preemptive removal of content remains the most cost-effective way of satisfying competing moderation demands, even if it ultimately results in some loss to platforms' user bases.
Written by The Copia Institute, April 2021