With just 25 days to go until the US Presidential election, Twitter has announced a range of new measures designed to stop the spread of misinformation, including significant design changes built into the tweet process that should prompt users to think twice before amplifying certain messages.
The major functional change is that, from October 20th ‘through at least the end of Election week’, whenever a user in the US taps the retweet option on any tweet, it will open the ‘Quote Tweet’ composer by default, as opposed to giving users the option to choose between a ‘Retweet’ or a ‘Quote Tweet’.
As you can see here, the new sequence removes the pop-up option where you can choose to simply retweet a message.
You can still retweet as normal by not entering any text in the composer, but the revised design will ideally prompt more people to add their own thoughts, and/or re-assess exactly why it is that they’re retweeting each message.
Twitter announced the change by tagging onto a trending meme format.
The process was spotted in testing by reverse engineering expert Jane Manchun Wong earlier in the week, and, as noted, it could add an extra, important level of friction to the tweet amplification process, which may prove effective in getting users to consider their actions more carefully.
A good example of this is Twitter’s recently introduced ‘read before retweet’ prompt, which calls on users to open a link before they share it.
That extra step has already had a significant impact, with Twitter reporting that users open articles 40% more often when shown these prompts.
Removing the straight retweet option may not seem like a major shift, and it may not be as up-front as these prompts, but the results here do show that such nudges can be effective in altering user behavior.
Some users will begin seeing this new process on the web version of Twitter from today.
In addition, running from October 20th until whenever Twitter deems necessary, Twitter will also put a halt on all “liked by” and “followed by” recommendations from people you follow appearing in your main feed.
As explained by Twitter:
“These recommendations can be a helpful way for people to see relevant conversations from outside of their network, but we are removing them because we don’t believe the “Like” button provides sufficient, thoughtful consideration prior to amplifying Tweets to people who don’t follow the author of the Tweet, or the relevant topic that the Tweet is about.”
This makes a lot of sense – the addition of tweets liked by those you follow within your feed is generally not overly beneficial, and it also turns tweet likes, essentially, into random retweets, which may mean that the person who liked the tweet ends up sharing it among their followers unintentionally.
People ‘Like’ tweets for different reasons – sometimes to indicate agreement, sometimes to tag something to read later, etc. As such, the mechanism which re-shares your likes is far from ideal, and at a time when Twitter is working to promote more thoughtful sharing, removing unintended amplification seems like an obvious step.
Twitter’s also narrowing down its Trends to only display those which include additional context.
Early last month, Twitter announced a new effort to include more context within its Trends listings, by providing a short explainer or an example tweet on each, which makes it clearer why, exactly, a term or entity might be trending at any given time.
For the election period, this will now be the default – which is good, because there are still many instances where, say, a celebrity’s name will be trending, and you have that moment where your heart skips a beat in fear for their life, or a random word will show up, like earlier this week, when ‘Fly’ briefly appeared on my Trends list.
As you can see in this example, all the trends listed on Twitter’s ‘For You’ discovery page will now include added context. Again, this will be in place for US users from October 20th until whenever Twitter deems fit.
In addition to this, Twitter has also added some new rules around election-related content, particularly in regard to claims of victory by candidates and voter intimidation at the polls.
On election outcomes, Twitter says that it will not allow users, including candidates, to claim victory on its platform until the outcome is officially announced.
“To determine the results of an election in the US, we require either an announcement from state election officials, or a public projection from at least two authoritative, national news outlets that make independent election calls. Tweets which include premature claims will be labeled and direct people to our official US election page.”
So, Twitter will still leave these claims up, it will just label them, and direct users to official updates. Facebook has taken a similar approach, though it also plans to display top of feed announcements on the progress of vote counts across both Facebook and Instagram, taking it a step further.
With US President Donald Trump repeatedly refusing to assure a peaceful transfer of power should he lose the vote, there’s a real concern that he, or others, could use the mass reach of social media to falsely claim victory. That could lead to a difficult situation, even civil unrest, if there’s a dispute over the official counts, and none of the major social platforms want to play any part in facilitating such a scenario.
Really, Twitter should just remove any such claims, but by referring people to official information, it should blunt their impact, while still enabling users to see what candidates are claiming.
That said, Twitter will remove threats of intimidation at polls, or efforts to dissuade people from voting:
“Tweets meant to incite interference with the election process or with the implementation of election results, such as through violent action, will be subject to removal. This covers all Congressional races and the Presidential Election.”
So Twitter could remove false claims and threats under these rules. Either way, it’s taking a tougher stance on such content from here on.
But that’s not all – Twitter will also add another new prompt designed to stop people from sharing tweets that have been tagged as including misleading information.
“Starting next week, when people attempt to Retweet one of these Tweets with a misleading information label, they will see a prompt pointing them to credible information about the topic before they are able to amplify it.”
Again, this is another level of friction designed to prompt a re-think before a user amplifies such messages, and Twitter’s also adding new warnings and restrictions on any tweets carrying a misleading information label that come from:
- US political figures
- US-based accounts with more than 100,000 followers
- Profiles that see significant engagement
“People must tap through a warning to see these Tweets, and then will only be able to Quote Tweet; likes, Retweets and replies will be turned off, and these Tweets won’t be algorithmically recommended by Twitter.”
This is a key element, because high-profile users are the ones who are able to add significant credibility and amplification to false and misleading claims.
This was the case late last year when, during the UK election campaign, an image of a child forced to lie on the floor in an overcrowded hospital was circulated on social media. Unfounded rumors suggested that the image was faked, and those claims were then boosted by several celebrities, rapidly escalating the issue and turning it into a far more divisive, aggressive dispute.
The case highlights the credence that high profile users can unwittingly lend to such campaigns, getting them in front of many more users. In the US, actor Woody Harrelson has shared COVID-19 conspiracy theories, including the suggestion that 5G may be facilitating the virus’ spread. Harrelson has more than 2.2 million followers on Instagram alone.
Limiting questionable claims from these users makes a lot of sense with respect to slowing viral spread.
These are some significant, important measures from Twitter, which could go a long way towards addressing key election concerns and potential misuse of its platform. Of course, Facebook is still seen as the key focus in this respect, but Twitter too has been the subject of various investigations into the rapid spread of misinformation, and it remains the social media platform of choice for President Trump.
But then again, most of these past investigations have highlighted Twitter bot armies as the key tool for boosting misinformation via tweet. Twitter has made efforts to address this – back in April, the platform removed 20,000 fake accounts linked to the governments of Serbia, Saudi Arabia, Egypt, Honduras and Indonesia as part of its ongoing efforts to combat misuse, while it’s also questioned the validity of such studies in measuring the impact of bot activity on its network.
Seemingly, Twitter has addressed at least some elements in this respect. How effective those measures have been, we’ll have to wait and see, but bots still loom as a significant amplification concern alongside these other preventative measures.