TikTok limits the visibility of content created by certain types of users to protect them from being bullied.
Summary: TikTok, like many social apps that are mainly used by a younger generation, has long faced issues around how to deal with bullying done via the platform. According to leaked documents revealed by the German site Netzpolitik, one way that the site chose to deal with the problem was through content suppression -- but specifically by suppressing the content of those the company felt were more prone to being victims of bullying.
The internal documents showed different ways in which the short video content that TikTok is famous for would be rated for visibility. This included content chosen to be “featured” (i.e., shown to more people), but also content marked “Auto R” for a form of suppression. Content rated as such was excluded from the “for you” feed on TikTok after reaching a certain number of views. Since the “for you” feed is how most people view TikTok videos, this rating effectively put a cap on views. The end result was that the reach of content categorized as Auto R was significantly limited, and such content was prevented from going “viral” and amassing a large audience or following.
What was somewhat surprising was that TikTok’s policies explicitly suggested putting those who might be bullied in the “Auto R” category -- even saying that users who were disabled, autistic, or had Down syndrome should be put in this category to minimize bullying.
According to Netzpolitik, employees at TikTok repeatedly pointed out the problematic nature of this decision, noting that it was itself discriminatory: it punished people not for any bad behavior, but because of a belief that their differences might make them targets of bullying. However, they said they were prevented from changing the policies by TikTok’s corporate parent, ByteDance, which dictated the company’s content moderation policies.
Decisions to be made by TikTok:
- What are the best ways to deal with and prevent bullying done on the platform?
- What are the real world impacts of suppressing the viral reach of any content based on the type of person making the content?
- Is it appropriate to effectively prevent those you think will be bullied from getting full access to your platform to prevent the possibility of bullying?
- What data points are being assessed to justify the assumptions being made about “Auto R” being an effective anti-bullying tool?
Questions and policy implications to consider:
- When policymakers strongly push platforms to “stop bullying,” will it lead to unintended consequences -- such as effectively minimizing potential victims’ access to those platforms -- rather than addressing the root causes of bullying?
- Will efforts to prevent bad behavior end up merely sweeping that activity under the rug, rather than actually making a platform safer?
- What is the role of technology intermediaries in preventing bad behavior?
Resolution: TikTok admitted that these rules were a “blunt instrument,” put in place rapidly to try to minimize bullying on the platform -- but said that the company had since realized it was the “wrong” approach and had implemented more nuanced policies:
"Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy," he told the BBC.
"This was never designed to be a long-term solution, and while the intention was good, it became clear that the approach was wrong.
"We have long since removed the policy in favour of more nuanced anti-bullying policies."
However, the Netzpolitik report suggested that this policy had remained in place at least until September 2019, just three months before its reporting came out in December 2019. It is unclear exactly when the “more nuanced” anti-bullying policies were put in place, but it is possible that they came about due to the public exposure and pressure generated by the reporting on this issue.