In the ongoing debate about the impact of misinformation shared online, and the role that social media platforms in particular have played in spreading it, a new anti-disinformation push in Europe could play a major role in improving detection and response around the world.
As reported by The Financial Times, Meta, Twitter, Google, Microsoft and TikTok are all planning to sign an updated version of the EU’s ‘anti-disinformation code’, which will introduce new requirements for dealing with misinformation, along with penalties for non-compliance.
As per the FT:
An updated “Code of Practice on Disinformation” will force technology platforms to disclose how they are removing, blocking or curbing harmful content in advertising and content promotion, according to a confidential document seen by the Financial Times. It will have to address “harmful disinformation”, which may include suppressing propaganda, and will also include “verification indicators” of independently fact-checked information on issues such as the Ukraine war and the COVID-19 pandemic.
The push would see an expansion of the tools currently used by social platforms to detect and remove misinformation, while it could also establish a new body to set the rules for what qualifies as ‘misinformation’ in this context, which would take some of that responsibility off the platforms themselves.
That would, however, put more control in the hands of government-sanctioned groups to determine what is and is not ‘fake news’ – which, as we have seen in some regions, can also be used to stifle public dissent.
Twitter, for example, was forced last year to block hundreds of accounts at the request of the Indian Government after users shared ‘inflammatory’ remarks about Indian Prime Minister Narendra Modi. More recently, Russia banned almost every non-local social media app over reporting on the Ukraine invasion, while the Chinese government has long blocked most Western social media platforms.
Implementing legislation to curb misinformation places, by default, the responsibility for determining what falls under the ‘misinformation’ banner with lawmakers, which on the surface, in most regions, seems like a positive step. But it can also be used in negative, authoritarian ways.
In addition, platforms will need to provide country-by-country breakdowns of their efforts, as opposed to sharing global or Europe-wide data.
The new regulations will eventually be incorporated into the EU’s Digital Services Act, which will force platforms to take the relevant action or face fines of up to 6% of their global turnover.
And although the code will apply exclusively to European countries, similar proposals have already been floated in other regions, with the Australian, Canadian and UK Governments all looking to implement new laws that would force big tech platforms to take measures to limit the distribution of fake news.
As such, this latest push points to a broader, international approach to tackling online fake news and misinformation, one which would make digital platforms responsible for combating false reports in a timely, efficient manner.
Which, on balance, seems like a positive. But again, the complexities involved could make enforcement difficult, which also points to the need for an overarching regulatory approach to determining exactly what ‘fake news’ is, and who gets to decide, on a broader scale.
It is one thing to refer to ‘fact-checkers’, but really, given the risk of abuse, there should be an official, objective body, separate from government, that can oversee such determinations.
That, too, would be very difficult to implement. But again, the risk of enabling censorship under the guise of targeting electoral ‘misinformation’ could pose as significant a threat as the false reports themselves.