TikTok Provides an Update on its Approach to Hate Speech and Offensive Content


TikTok has provided an update on its efforts to remove hate speech and offensive content from its platform, while also, somewhat strangely, taking shots at other social platforms for their past efforts on the same front.

First off, on TikTok’s own efforts – the platform says that since the start of 2020, it’s removed more than 380,000 videos in the US for violating its hate speech policies.

As per TikTok:

“We’ve also banned more than 1,300 accounts for hateful content or behavior, and removed over 64,000 hateful comments. To be clear, these numbers don’t reflect a 100% success rate in catching every piece of hateful content or behavior, but they do indicate our commitment to action.”

Without a relative comparison, it’s hard to know what those figures actually represent, but they do show that TikTok is taking action, and working to address some key areas of concern – which is critically important when you also consider that more than a third of its daily users in the US are under 14 years of age.

TikTok says that it uses a range of measures to both detect and limit the spread of hate speech, including redirecting people who search for offensive content to its guidelines and rules:

“For instance, if someone searches for a hateful ideology or group, such as “heil Hitler” or “groyper,” we take various approaches to stop the spread of hate, including removing related content, refraining from showing results, or redirecting the search to our Community Guidelines to educate our community about our policies against hateful expression. It’s not a fail-safe solution, but we work to quickly apply this approach to hate groups as they emerge.”

TikTok also notes that it’s evolving its policies in line with regional and inter-community usage:

“If a member of a disenfranchised group, such as the LGBTQ+, Latinx, Asian American and Pacific Islander, Black, and Indigenous communities, uses a slur as a term of empowerment, we want our moderators to understand the context behind it and not mistakenly take the content down. On the other hand, if a slur is being used hatefully, it doesn’t belong on TikTok. Educating our content moderation teams on these important distinctions is ongoing work, and we strive to get this right for our users.”

Which is particularly interesting given TikTok’s past controversies around its moderation decisions.

Earlier this year, The Intercept published sections of TikTok’s internal guidelines, which included instructions for moderators to suppress posts from users deemed “too ugly, too poor, or too disabled” for the platform. The rationale was that promoting content from such users would not help keep viewers engaged. Moderators were also instructed to censor political speech in certain contexts.

TikTok has since explained that these guidelines were designed for use within China, and applied not to TikTok but to Douyin, the local Chinese version of the app. Even so, given the platform’s own past stances, it seems a bit contradictory for it to present itself as a bastion in the battle against hate speech.

But then, I guess that’s not necessarily what TikTok is trying to do – the details outline its progress in addressing such content, which is a positive. It just feels a little rich coming from a platform that, in the past, specifically implemented rules to unfairly suppress certain content.

Then there’s this statement:

“We also actively work to learn and get feedback from experts, like those on our Content Advisory Council and civil society organizations. Our industry hasn’t always gotten these decisions right, but we are committed to learning from the mistakes of others’ – and our own.” 

Again, it feels a bit rich for TikTok to point the finger at ‘others’ and note that they don’t always get it right, given, once more, TikTok’s own history of controversial moderation decisions – including restricting the reach of content from users who appear to have autism, Down’s syndrome or facial disfigurements.

In this sense, TikTok’s approach feels a bit like deflection, re-focusing people’s attention on the industry, as a whole, in order to soften the perceptions of its own app.

And overall, the notes here are good. They show that TikTok is working to address dangerous content, and that it is looking to tackle hate speech and other concerns. But the wording also feels like part of TikTok’s broader effort to re-frame itself as a place of pure positivity and inspiration – ‘the last sunny corner of the internet’, in its own words.

The overall approach here is important, as is TikTok’s policy evolution. But it’s equally relevant to note TikTok’s own recent history of controversy around the same elements.
