US President Donald Trump’s COVID-19 diagnosis is one of the biggest stories in the world right now, and as you might expect given the divisive nature of US politics at present, many of the social media posts about Trump’s situation have not been sympathetic.
That’s prompted Twitter to reiterate its rules around wishing harm against others, which it updated back in April to cover wishes of death, serious bodily harm, or fatal disease against anyone, including the President.
As per Twitter:
“We’ve taken significant steps to address Tweets that violate our policies on abuse without people having to report it, with more than 50% being caught through automated systems.”
Which sounds positive – yet a simple Twitter search for ‘hope he dies’ still uncovers a broad range of tweets that, under these rules, should be removed.
Which highlights the difficulty of Twitter’s position, and indeed, the challenge that all social platforms face in policing what is and is not acceptable in everyday speech.
The great promise of social media platforms lies in giving everybody a voice, a platform from which to be heard, which enables people from all walks of life to connect and share. That, theoretically, should facilitate greater understanding and empathy – if everybody has a voice, then we can hear from all perspectives and broaden our world through online conversation.
This is the ideological concept, but as we’ve seen, the reality falls far short of this utopian vision.
The flip side is that by giving everyone a voice, you also, inadvertently, amplify the negative. Dangerous conspiracy theories have more opportunity to take root in the minds of those open to such ideas, and niche ideologies can flourish by branching out to diverse, disparate, and once-disconnected groups. Once you provide a means for more voices to be heard, you also allow more radical, fringe groups to expand, and that can have dangerous consequences, in varying forms.
Which is why platforms need rules. But who decides what’s acceptable and what’s not? Who decides what’s true and what isn’t?
The longer these counter-culture groups are allowed to expand, the stronger they grow, raising more questions as to who’s in charge, who should be, and what can be done to correct the balance.
Which leaves social platforms in a difficult position. Now, rather than simply facilitating connection and discussion, they also need to consider the implications of those conversations, and police them accordingly. Which then limits connection, and some would say, impinges on free speech.
But what else can they do? Allowing outright hate speech is clearly not acceptable, but what about speech that’s just a little hateful? What about content that’s just a little divisive? Wherever the line is drawn, some division will still slip through.
And once you do draw that line, how can you effectively police it, when there are so many variations on how people can phrase these messages?
The situation once again underlines the complex balance that social platforms now need to maintain in order to facilitate connection without providing a platform for negativity, a balance that’s almost impossible to strike. And while the focus right now is on the US President, there will be many more situations of this type in future, where platforms need not only to draw a line in the sand, but also to decide exactly where that line should be placed.
Giving everybody a platform comes with significant risks. Is it even possible to lessen them without limiting expression?
Some have even questioned whether social platforms should interfere at all, as people can choose to participate or not. But by providing a means for people to amplify their messages to millions, even billions, of people, the platforms do indeed play a role, and have a responsibility to limit negative impacts where they can.
But there are no easy answers. Increased moderation, third-party fact-checking, and external oversight groups to assist in content rulings are all important, valuable elements, but none can ensure the elimination of dangerous movements, misinformation, misrepresentation, and the like.
People are still going to tweet things that are against the rules, those tweets are still going to be seen, and people are still going to respond, both emotionally and physically, even if a given tweet is later removed.
No system can stop all of these comments from being seen. So what then? How do we move forward in an increasingly divided world when social platforms continue to provide a means for these messages to spread?
Can it be fixed? Would we be better off without social platforms, with more editorial gatekeepers slowing the spread of such comments? Or has this division always existed, with social platforms merely exposing us to more of it, and giving us a means to address it by getting it all out in the open?
These will be key questions for social media platforms moving forward, especially in the lead-up to the coming US election.