
How to check fake news, misinformation on social media: Key report suggests transparency law, regulator answerable to Parliament

NEW DELHI: Organised misinformation with commercial or political motives is rife on social media platforms like Twitter, Facebook and others, but the content moderation policies of these tech giants are more public relations exercises than genuine attempts to curb manufactured information, a key report by an important think tank has found.
The Future of India Foundation, in its report titled ‘Politics of Disinformation’, has mooted a series of steps, including a law that ensures transparency by social media platforms and also a regulator under the oversight of Parliament, to check the menace of fake news and misinformation.
Cheap internet and the proliferation of mobile phones across all segments of the population have made social media platforms the dominant providers of information to the people, the report said.
“This ubiquity could have been a golden moment for India – democratising access to information, fostering community, increasing citizen participation and reducing the distance between ordinary people and decision-makers,” the report added.
Tech giants, however, have made design choices that lead to the mainstreaming of misinformation while allowing themselves to be weaponised by powerful vested interests for political and commercial benefit, it said.
“The consequent free flow of organised misinformation (disinformation), hate and targeted intimidation has led to real world harm and degradation of democracy in India: anti-minority hate has been mainstreamed and legitimised; communities have become divided and polarised,” it added.
A survey conducted by the Future of India Foundation showed that while youngsters absorbed a lot of information from social media platforms, it often confused rather than enlightened them about issues and events.
Social media companies were also not immune to pressure from the powerful, including influencers, media, politicians and governments, the report said.
“Platforms have been known to take down or block content (including critical political speech) based on government requests while also making exceptions for powerful users who are at times linked to the government and its affiliates. The underlying principle in many instances has been linked less to adherence to content moderation rules and more to business considerations and/or bad PR,” the report added.
No distinction between the credible and the doubtful
A key point raised in the report was that social media platforms had blurred the distinction between different sources of information, removing an important signal of credibility.
Instead, engagement was perceived as the bigger driver of the importance, and by extension the credibility, of a piece of news.
“This equal treatment (appearance and placement of different and unequal sources of information) and making virality instead of quality the primary determinant of a source’s credibility and/or a piece of content’s importance has eroded the distinction between vetted information, propaganda and misinformation in the minds of the user,” the report said.
Free speech a business model for social media platforms?
The report said content moderation decisions by platforms were often ad-hoc and driven by external pressure – especially government, media, PR – instead of coherent speech policies.
Most importantly, platforms opportunistically used the laudable principle of “free speech” and the protection against liability for intermediaries to advance their business models while failing to ensure a good information ecosystem, the report said.
“Traditional news media is liable for published content and must thus invest time and resources to vet information before publishing. Platforms compete with traditional news publishers for advertising revenue while enjoying the double advantage of speed (to get content to users) and protection from liability (for unvetted content),” it further said.
“Since advertising revenue is directly proportional to the amount of time users spend on the platforms, platforms have exploited this twin advantage to boost user engagement without caring about the deleterious impact of a surfeit of misinformation on the information ecosystem and wider democracy. It can thus be argued that for social media platforms, ‘free speech’ is a business model instead of a principled imperative,” the report added.
Cures for the malaise
The report said the lack of transparency by social media platforms was one of the major hurdles to checking false or motivated information.
“Even when platforms have disclosed certain kinds of information (e.g., the Ad Library by Facebook), the data is often not presented in a manner which facilitates easy analysis and prompt response. At the same time, there are reasonable privacy concerns about certain kinds of data sharing,” it said.
India must enact its own comprehensive transparency law to ensure parity and relevance for the country, it added.
The report also said giving control over the public discourse to a handful of individuals heading technology companies lacked both transparency and democratic legitimacy.
However, given the highly dynamic and ‘of the moment’ nature of this issue, any legislative route would rapidly become outdated. At the same time, bringing governance of speech under state purview was fraught with risks to free speech and democratic dissent, the report said.
“A way forward could be to constitute a statutory regulator under parliamentary oversight. The regulator would have statutory powers to lay out broad processes for governance of speech, set standards for transparency for social media platforms (within the framework of the law referenced in the point above) and audit social media platforms for compliance; and advisory powers to develop a point of view on key misinformation themes/events in the country,” it added.
To ensure the free media did not come under government control, the report said the regulator would be answerable to Parliament and not the executive.
Among other suggestions, the report said social media platforms should amplify content from vetted producers.
The report also suggested that platforms should directly label the source of information on trustworthiness instead of placing labels on individual pieces of content.
“Under this proposal, users who repeatedly post borderline content, or lift original content, or post content fact-checked to be false by independent third-party fact checkers and/or other reputation and credibility related research will be labeled as ‘Low Credibility Source’ in addition to the content itself being labeled as false,” the report said.
