A day after putting Parler on notice of a potential ban over serious App Store guideline violations, Apple has suspended the app from the App Store until its makers resolve its content moderation problems.
“We have always supported diverse points of view being represented on the App Store, but there is no place on our platform for threats of violence and illegal activity,” Apple said in a statement provided to me via email. “Parler has not taken adequate measures to address the proliferation of these threats to people’s safety. We have suspended Parler from the App Store until they resolve these issues.”
Note: Apple has suspended Parler, not banned it. That means if Parler implements content moderation standards that meet the App Store guidelines, Apple will likely reinstate the app.
Apple supplied examples of Parler content that failed to meet App Store guidelines in its warning yesterday, and more are readily available. Here are a few I found just today, including one using the N-word that was shared to the ParlerWatch subreddit.
Apple’s full email to Parler:
“To the developers of the Parler app,
“Thank you for your response regarding dangerous and harmful content on Parler. We have determined that the measures you describe are inadequate to address the proliferation of dangerous and objectionable content on your app.
“Parler has not upheld its commitment to moderate and remove harmful or dangerous content encouraging violence and illegal activity, and is not in compliance with the App Store Review Guidelines.
“In your response, you referenced that Parler has been taking this content “very seriously for weeks.” However, the processes Parler has put in place to moderate or prevent the spread of dangerous and illegal content have proved insufficient. Specifically, we have continued to find direct threats of violence and calls to incite lawless action in violation of Guideline 1.1 – Safety – Objectionable Content.
“Your response also references a moderation plan “for the time being,” which does not meet the ongoing requirements in Guideline 1.2 – Safety – User Generated content. While there is no perfect system to prevent all dangerous or hateful user content, apps are required to have robust content moderation plans in place to proactively and effectively address these issues. A temporary “task force” is not a sufficient response given the widespread proliferation of harmful content.
“For these reasons, your app will be removed from the App Store until we receive an update that is compliant with the App Store Review Guidelines and you have demonstrated your ability to effectively moderate and filter the dangerous and harmful content on your service.”
“Regards, App Review Board”
This is a breaking story; refresh for more.