
A new advisory by India’s IT ministry outlines strict regulations for AI platforms

Considering the focus on AI in recent months, concerns regarding the potential misuse of AI for malicious purposes have also grown steadily, encompassing the spread of misinformation, the manipulation of public opinion, and even the creation of deepfakes. With that in mind, and with the use of AI for deepfakes already surging in India, the country’s Ministry of Electronics and Information Technology (MeitY) issued a comprehensive advisory on March 1, outlining a series of measures aimed at preventing the misuse of AI for generating misinformation and deepfakes, as well as promoting the responsible development of the technology.

The advisory establishes a crucial safeguard by requiring “explicit government permission” before deploying “under-testing” or “unreliable” AI models, including large language models (LLMs), to Indian users, in order to prevent the release of untested or potentially harmful AI models into the public domain. The current advisory is not legally binding, though it can be seen as a significant step towards establishing a comprehensive regulatory framework for AI in India.

“Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under-testing,” Rajeev Chandrasekhar, Minister of State for Electronics and IT, commented on the matter.

This advisory comes amidst growing concerns regarding the potential misuse of AI to manipulate public opinion and disseminate false information, particularly during sensitive events like elections. The new advisory therefore mandates that all platforms utilizing AI, especially generative AI, must clearly label any synthetically generated content with a unique identifier or metadata. This step aims to enhance transparency by allowing users to easily identify the source of the information and assess its potential biases or limitations.
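The advisory does not prescribe a specific labelling format, so how a platform tags synthetic content is left to its own engineering. As a purely illustrative sketch, the snippet below shows one way a platform could attach an “AI-generated” marker and a unique identifier to a generated image using PNG text metadata via the Pillow library; the metadata keys, model name, and file paths are hypothetical examples, not drawn from the advisory.

```python
# Illustrative sketch only: tagging a generated image with a unique identifier
# and an "AI-generated" marker via PNG text metadata (Pillow). The keys and
# model name below are hypothetical, not mandated by MeitY's advisory.
import uuid
from PIL import Image, PngImagePlugin

def label_generated_image(img: Image.Image, model_name: str, out_path: str) -> str:
    """Attach provenance metadata to a generated image and save it as PNG."""
    content_id = str(uuid.uuid4())           # unique identifier for this output
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")    # marks the content as synthetic
    meta.add_text("generator", model_name)   # which model/platform produced it
    meta.add_text("content-id", content_id)  # identifier traceable by the platform
    img.save(out_path, pnginfo=meta)
    return content_id

# Example usage: label a placeholder image, then read the metadata back.
img = Image.new("RGB", (256, 256), "white")
label_generated_image(img, "example-model-v1", "labelled.png")
print(Image.open("labelled.png").text)  # {'ai-generated': 'true', 'generator': ...}
```

Real deployments would more likely rely on standardized provenance schemes or watermarking rather than plain file metadata, which is easily stripped; the point here is only what “embedding an identifier” could look like in practice.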

Beyond that, the advisory also emphasizes the importance of collaboration between the government, AI developers, and users. It mandates that all platforms submit an “Action Taken-cum-Status Report” to MeitY within 15 days, outlining the steps they have undertaken to comply with the advisory.

Furthermore, the advisory emphasizes the need for responsible development and usage. It mandates that all intermediaries and platforms employing AI tools must ensure that their systems do not promote bias, discrimination, or threats to the integrity of the electoral process. Additionally, platforms are required to clearly disclose the potential fallibility or unreliability of AI-generated outputs through mechanisms like “consent popups,” informing users about the limitations of the technology and fostering responsible consumption of AI-generated content.

“The platforms should figure out a way of embedding a metadata or some sort of identifier for everything that is synthetically created by their platform,” Chandrasekhar said. Under the new advisory, companies operating in the AI space must ensure that their platforms adhere to the transparency, accountability, and fairness standards it outlines. Failure to comply could result in legal repercussions and reputational damage for the companies involved; Chandrasekhar noted that this could include prosecution under the IT Act and other relevant criminal statutes.



