Meta, TikTok, Google, and Twitter are all getting ready to sign on to new European misinformation rules.
Amid ongoing debate over the impact of misinformation published online, and the role that social media in particular plays in disseminating false narratives, a new anti-disinformation push in Europe could significantly improve detection and response across the biggest digital media platforms.
Meta, Twitter, Google, Microsoft, and TikTok are all planning to sign on to a revamped version of the EU’s ‘anti-disinformation code,’ which will introduce new standards and penalties for dealing with misinformation, according to The Financial Times.
According to the Financial Times:
“A new ‘code of practice on misinformation,’ according to a private report reviewed by the Financial Times, will oblige internet platforms to reveal how they’re removing, blocking, or restricting harmful content in advertising and content promotion. Online platforms will have to combat ‘harmful misinformation’ by developing tools and forming partnerships with fact-checkers, which could include removing propaganda as well as including ‘indicators of trustworthiness’ on independently verified information on issues like the Ukraine war and the COVID-19 pandemic.”
The initiative would expand the tools social platforms currently use to detect and remove misinformation, and establish a new body to set rules around what constitutes ‘misinformation’ in this context, potentially removing some of the burden from the platforms themselves.
This would, however, give government-approved entities more power to determine what is and isn’t ‘fake news,’ which, as we’ve seen in some countries, can also be used to suppress public opposition.
Last year, for example, Twitter was obliged to deactivate hundreds of accounts at the behest of the Indian government after users made “inflammatory” statements about Indian Prime Minister Narendra Modi. More recently, Russia has blocked practically all non-local social media apps to suppress news about the invasion of Ukraine, and the Chinese government has long imposed similar restrictions on most Western social media platforms.
By default, enacting laws to combat misinformation puts lawmakers in charge of deciding what falls under the misinformation banner, which, on the surface, appears a positive step in most regions. It can, however, be employed in a repressive, authoritarian manner.
Furthermore, rather than sharing worldwide or Europe-wide data on such activities, the platforms would be compelled to publish a country-by-country breakdown of their efforts.
The new rules will eventually be incorporated into the EU’s Digital Services Act, obliging platforms to take appropriate action or face fines of up to 6% of their global turnover.
While this agreement would apply only within Europe, similar recommendations have already been made in other regions, with the Australian, Canadian, and United Kingdom governments all working to enact new legislation to force major platforms to take action to curb the spread of fake news.
As such, this latest push likely signals a broader, global approach to online fake news and misinformation, in which digital platforms will be held accountable for combating misleading reports quickly and effectively.
On balance, this is a positive step: most people would agree that misinformation has caused real harm in recent years. The complications around enforcement, however, highlight the need for an overarching regulatory approach to decide what constitutes ‘fake news,’ and who has the authority to make that call at scale.
It’s one thing to refer to ‘fact-checkers,’ but given the dangers of misapplication, oversight should come from an established, objective body that operates independently of government.
That, too, will be extremely difficult to put in place. But the danger of enabling censorship through selective ‘misinformation’ targeting may be just as great as that posed by false reports themselves.