A BBC Panorama investigation has claimed that TikTok is slow to act against adults who engage children in sexually explicit conversations, despite such conversations being flagged by users. The allegations follow similar claims made last year by a previous BBC investigation, suggesting that the social media company is still failing to address the perception that its network facilitates child predators.
TikTok ‘Slow’ To Act On Predators
Working with a 23-year-old woman who produces TikTok videos for an internet search company, the investigative programme created a mock account of a 14-year-old girl. Pictures of the 23-year-old were edited to make her look younger, while her posts were given hashtags (e.g. “#schoollife”) to indicate that she was under the age of consent.
Her TikTok account was soon followed by a number of older men, with a 34-year-old sending her a sexually explicit message, even after the woman had told him she was 14.
The ‘girl’ then reported the user and his comments to TikTok. However, the social media company took action only after BBC Panorama contacted it and provided details of its investigation, some four days after the initial report.
TikTok’s initial response to BBC Panorama was the following: “A report about a user’s account or comments will not generally trigger a review of direct messages”.
The social media company also explained that, because the report “was against the account in general, not the specific direct messages,” no action was taken.
But following the completion of BBC Panorama’s investigation, TikTok told the BBC that two accounts, as well as the devices used with them, had been permanently banned.
TikTok has also told me it’s working “continuously to make TikTok a hostile environment for predatory behaviour.” A spokesperson for the company says it’s the only platform that disables direct messaging for under-16s, that it allows direct messages between people over 16 only when they agree to follow each other, and that it prohibits the sharing of images and videos via direct messaging, regardless of age.
“We are already reviewing how to enhance the way we review user reports,” the spokesperson adds. “There is no such thing as ‘job done’ when it comes to protecting teenagers from online harms. That is why we are working with industry experts, NGOs and online safety specialists, as well as investing in our technology, processes and people, to continuously strengthen safety on TikTok.”
Moderation At A Distance
Due to be broadcast in the UK on BBC One, the episode of BBC Panorama also sees investigators speak to a former ‘content moderator’ who worked at TikTok’s London office. His job was to ensure that users abided by TikTok’s Terms of Service and Community Guidelines.
He says that, during his time at the company, TikTok’s Chinese headquarters made key decisions regarding content moderation, leaving him and his colleagues largely powerless to address problems such as sexual predation.
He told BBC Panorama, “It felt like not very much was being done to protect people. Beijing were hesitant to suspend accounts or take stronger action on users who were being abusive. They would pretty much only ever get a temporary ban of some form, like a week or something.”
He also says that he and moderators on his team didn’t have the ability to ban accounts, and that they’d have to ask TikTok’s Beijing office for permission to permanently suspend profiles.
In March, TikTok announced it would be transitioning away from using moderation staff based in Beijing. The company told BBC Panorama it invests “heavily in automated moderation.” It also says it has an “ever growing expert team” of more than 10,000 moderators in 20 countries, “who review and take action against content and accounts that violate” its policies.
The former content moderator also told BBC Panorama that, when he was working at TikTok, the company’s algorithms effectively supplied sexual predators with suggestive content.
“The algorithm will feed you what you interact with so if you’re looking at a lot of kids dancing sexually, and you interact with that, it’s going to give you more kids dancing sexually,” he said.
To test this claim, BBC Panorama set up another fictitious account, this time for a 36-year-old man. Whenever presented with images of young girls in school uniform, ‘the man’ liked them and watched videos to the end.
Within half an hour, Panorama reports, his “For You” page was filled with images of underage teens.
TikTok states that its Community Guidelines make it clear that it doesn’t allow the sexualisation of minors on its platform, with sexualised content being blocked from appearing in the For You feed. “We use a combination of technology and moderation teams to identify and remove content that breaches our guidelines,” its spokesperson adds.
It’s also worth pointing out that TikTok isn’t necessarily a special case in this respect. A similar complaint can be made against other social media platforms, with YouTube being another network that has been found to feed potentially provocative content to possible predators. This is a very hard problem to solve, since even if platforms remove tags from videos, algorithms tend to recommend videos which have been liked together.
Still, if social media companies can become more proactive in responding to complaints and removing potentially problematic accounts (or content), the problem may be eased to a significant extent. But with social media use expanding rapidly in the wake of the coronavirus pandemic, users, and the parents of users, will also need to remain vigilant.
This article has been updated to include comment from TikTok.