I have to say I am favorably impressed by the results of Twitter’s experiment to encourage users to read the full content of an article before retweeting it. The prompt was introduced in selective tests last May, and now, four months later, the results indicate that users opened articles before sharing them 40% more often than they did without it, although it’s impossible to know how thoroughly they actually read the content before passing it on to their followers.
Twitter is a viral paradise. Its signal-to-noise ratio is unbeatable, so its users tend to believe they can find out practically everything going on in the world simply by following enough accounts and watching the timeline they generate. However, the percentage of tweets that include links to additional content has grown over time, which complicates that signal-to-noise ratio: on many occasions, by simple force of habit, we process that additional content from what we have in sight, usually just a headline, sometimes accompanied by a short phrase and an image.
From a content creator’s point of view, the effect is easy to observe: as soon as you share something you have just written, people begin passing it on, surely with the best of intentions, long before they could have read and processed it properly. We tend to justify this on the grounds that we trust the people we follow, but as a general principle, and especially considering that even the best writers get things wrong and that many media outlets distort headlines, the sensible thing is to read carefully anything you intend to share with others.
From the privileged vantage point of the people who manage Twitter, it must be very easy to spot users sharing content from their timeline that they could not possibly have had time to read properly, hence the decision to introduce this warning: sharing content one has not read is obviously not the basis for a well-informed conversation.
The fact that a prompt was needed to get users to open articles before sharing them should make us think: what do we actually do on social networks? Share whatever reinforces our points of view, even if we haven’t gone beyond the headline and the image that illustrates it? Share what we want people to think we have read? Or both? Is it any wonder that social networks have become a breeding ground for rumors, fake news and campaigns of manipulation or misinformation? Twitter’s experiment explains many things. All that remains is for us to decide to remedy them, starting, of course, with our own behavior.