Will the events of this week mark a turning point for social media platforms and their approach to content moderation and censorship, particularly around dangerous movements and politically motivated debate?
Following Wednesday's riots at the Capitol building – instigated, at least in part, by US President Donald Trump, who had called on his supporters to mobilize, and even fight for him, in a last-ditch attempt to overturn the election result – all of the major social platforms took action against the President and his avid supporters, in varying ways.
Many believe that these actions were long overdue, given Trump’s history of using social media as a megaphone for his divisive agenda, but others have rightly noted that this was the first time that the resulting violence could be directly tied to social media itself.
There’s been a range of previous movements, like QAnon, which have been linked to criminal acts, but the actual role that social media has played in facilitating them has been up for debate. That’s somewhat similar to Russian interference in the 2016 US Election – we now know that Russian-based groups did seek to interfere, and to influence voter actions in the lead-up to the poll. But did those efforts actually work? Did people really change their voting behavior as a result? The actual impact is difficult to measure accurately.
This week’s Capitol riots, however, can be clearly and directly linked to social media activity.
As explained by ProPublica:
“For weeks, the far-right supporters of President Donald Trump railed on social media that the election had been stolen. They openly discussed the idea of violent protest on the day Congress met to certify the result.”
The specific details of their planned action were also tied to Trump’s social posts – after one of Trump’s tweets singled out Vice President Mike Pence, for example, the focus of the protesters turned to Pence, with various reports suggesting that they intended to kidnap him, or hold him hostage, in order to force Congress to reinstate the President.
This was the first time that the full plan of action, from inception to outcome, was traceable via social posts and activity, with the President actively playing a part in provoking and inciting the mob. Which is why the platforms moved to stronger mitigation efforts in response – but does that mean they’ll look to change how they view similar incidents in the future?
In many ways, Trump himself is an anomaly – a hugely popular celebrity turned politician, who then used his celebrity status to share his messaging via social platforms. Having a well-known personality become a politician is not uncommon, so it’s not an entirely new approach, but the way Trump weaponized his social media following is unlike anything we’ve seen before.
As Trump himself told Fox Business in 2017:
“I doubt I would be here if it weren’t for social media, to be honest with you. […] When somebody says something about me, I’m able to go bing, bing, bing and I take care of it. The other way, I would never get the word out.”
In this way, Trump essentially used social platforms as his own propaganda outlet, deriding any stories critical of him and his administration as ‘fake news’, while hailing any positive coverage as 100% accurate. This led to various contradictions – one week, The New York Times was ‘the enemy of the people’, publishing false information at will, then the next, when it published a poll in his favor, it was acceptable once again.
Yet, despite these inconsistencies, Trump’s supporters lapped it up, and over time, he was able to use his social media presence to build a cult-like following, which eventually led to him effectively inciting a coup attempt in an effort to keep himself in power. This is despite there being no solid evidence to support his claims of massive voter fraud, which, in Trump’s view, invalidated the election result.
Trump’s approach has been different from anything we’ve seen in the past – but that doesn’t mean it can’t happen again. And if it does, will the social platforms look to take a tougher stand earlier on? Will they now see that the end result of a more ‘hands-off’ approach is more dangerous than addressing such issues in their initial stages?
A key example here is QAnon – as far back as 2016, various experts warned Facebook about the dangers posed by the ‘pizzagate’ conspiracy movement, which had been gaining momentum and support across its platforms. Facebook declined to take action, citing its ‘free speech’ ethos, and that initial seed evolved into a more organized movement, which eventually morphed into the QAnon group. An internal investigation conducted by Facebook last year found that the platform had provided a home for thousands of QAnon groups and Pages, with millions of members and followers. And as threats of violence and dangerous activity were increasingly linked to the group, Facebook finally chose to act, first cracking down on QAnon groups in August last year, before announcing a full ban on QAnon-related content in October.
Facebook will argue that it acted based on the evidence it saw, and in line with its evolving approach to such content. But it does seem likely that, had Facebook taken a stronger stance in those initial stages, QAnon might never have developed the momentum that it did. And while QAnon was only one of the many groups at play in the Capitol riot, you could argue that the situation could have been avoided had there been a more concentrated effort to draw a line on misinformation and dangerous speech much earlier in the piece.
Yet at the same time, Facebook has been calling for a more comprehensive approach to such issues – as noted by Instagram chief Adam Mosseri this week:
“We, at Facebook and Instagram, have been clear for years that we believe regulation around harmful content would be a good thing. That gets tricky when elected officials start violating rules, but is still an idea worth pursuing.”
Facebook itself has established its own independent Oversight Board to assist with content decisions – a team of experts from a range of fields that will help the company implement better approaches to content moderation, and rule on what should and should not be allowed on its platforms.
The Oversight Board has only just begun its work, and it remains an experiment in many respects; we don’t know what sort of impact it’ll end up having. But Facebook sees it as a micro-example of what the entire industry should be seeking.
Again from Mosseri:
“We’ve suggested third-party bodies to set standards for harmful content and to measure companies against those standards. Regulation could set baselines for what’s allowed and require companies to build systems accordingly.”
In Facebook’s view, it shouldn’t be up to the platforms themselves to rule on what’s allowed in this respect, it should come down to a panel of independent experts to establish parameters for all platforms, in order to ensure uniformity in approach, and lessen the burden of censorship on private organizations – which clearly have different motivations based on business strategy.
In some ways, this means that Facebook is agreeing with critics that it hasn’t adequately addressed such concerns, because it’s working to balance different goals, while it’s also learning as it goes in many respects. No company has ever been in Facebook’s situation before, serving over 2.7 billion users, in virtually every region of the world, and when you’re working to monitor the actions of so many people, in so many different places, with so many different concerns, inevitably, things are going to slip through the cracks.
But at that scale, when things do slip, the consequences, as we’ve seen, can be significant. And many also overlook, or are unaware of, the impacts Facebook has had in smaller markets across Asia and Africa, where it’s also seen as a major influence on local politics, elections, and civil unrest.
Maybe now, however, with these scenes playing out on the doorstep of American democracy, with Senators locked in their offices to escape the violence, there’ll be an increased push for change, and for more stringent rules around what action needs to be taken to stamp out concerning movements before they can take root.
Whether that comes from the platforms themselves, or via increased external regulation, the Capitol riots could be a turning point for social media more broadly.
Of course, there will always be those who seek to push the limits, no matter where those limits are set, and there will always be elements that ride the line, and could easily veer into more dangerous territory. But it seems clear now that something must be done, with the weaponization of social media posing major risks.
Will that spark a new debate around the limits of free speech, and the responsibility of big tech? It seems like now is the time to ask the big questions.