Georgetown University Discusses The Great Deplatforming: Removing Trump From Social Media



Shortly after the attack on the U.S. Capitol on January 6, 2021, a number of social media companies, starting with Twitter, kicked President Donald Trump off of their platforms. Following Twitter, Facebook banned Trump from its platforms, including Instagram, and then other social media companies, including YouTube, followed suit. Within a few days, Trump found himself with no digital means of getting his word out.

Of course he wasn’t muted – as President he could call a press conference any time he wished, and he could be confident that the media would attend. But like many politicians who had learned about the power of unfiltered access to the internet, Trump had grown used to being able to have his say whenever he wanted it. Now, through the actions of a couple of large companies, he couldn’t get his word out in the way he wished.

This raised questions in many circles at the time, and those questions haven’t gone away in the time since. What are the implications of silencing a sitting President? Is it legal or ethical to shut off access to those platforms?


To answer those questions, the Georgetown University Law Center held a panel discussion with four of its top legal experts to examine the issue of "deplatforming" a sitting President.

The first question, whether it's legal for the companies to remove a sitting President from a publicly available social media platform such as Twitter or Facebook, was answered at the beginning of the panel discussion by moderator Hillary Brill, acting director of Georgetown Law's Institute for Technology Law and Policy. Brill noted the outcry by many when it happened that it was somehow a limitation on free speech, pointing out that the First Amendment to the U.S. Constitution only protects free speech against limitations from the government. Twitter, Facebook and the others, she noted, are private companies, so there is no First Amendment issue regarding their actions to block the President from posting on their services.

But that didn't mean there weren't concerns. Professor Erin Carroll, who teaches communications and technology and the press, said that she was concerned about the power of big tech and the lack of transparency. "When you clear away disinformation, will there be truth behind it?" she asked.

Unfortunately, there may not be. Carroll pointed out that when Trump and his sympathizers were booted from mainstream social media, they moved to other platforms, such as Telegram and Signal, which are messaging services where law enforcement has little access, and Gab, which makes little attempt to control the content of messages. Another social media site, Parler, initially benefitted as a sort of home away from home for refugees from Twitter, but sponsors didn't like its lack of moderation, and Amazon, which was hosting the service, refused to continue carrying it. That effectively killed Parler.

Speech doesn’t go away

“Speech doesn’t go away,” Carroll said, “it just finds other places.”

According to Professor David Vladeck, the A.B. Chettle Chair in Civil Procedure at Georgetown Law and former head of the FTC Consumer Protection Bureau, much of the issue about removing someone such as Trump from a platform is rooted in Section 230 of the Communications Decency Act, which protects internet providers from liability for material that others post on their sites. He said that Section 230 enables a lot of the problems: "It gives very broad immunity for publishing harmful or defamatory information." While he doubts that Section 230 will be repealed, he thinks it's likely to be changed. He noted that former President Trump's desire to repeal that section was based on a lack of understanding of what it did. In effect, he said, repeal would have allowed platforms much greater control over what he posted, rather than less.

That then raised the question of just how online content should be controlled. Professor Anupam Chander, who teaches communications and technology law, suggested that changing Section 230 to bring more content moderation might not be a good thing. “It could lead to a ‘Disneyfied’ universe,” he said. That would be one in which no negative information exists.

Transparency needed

Instead, Carroll said that what's needed is for the industry to adopt greater transparency in how it makes decisions. She said that when new rules, such as a revision of Section 230, are written, it needs to be done by people who understand the law, and who understand the way online services such as Twitter and Facebook work.

"How do we have policies that promote facts versus propaganda?" Carroll asked. She suggested that there needs to be some accountability for who makes decisions such as deplatforming a President.

So far, however, there seems to be no obvious answer to the question of when, or whether, to remove the President (or anyone else, for that matter) from such a platform. But it appeared clear that the first step should be to update current legislation to at least reflect how these services work, and to make sure that there's transparency.
