New Delhi — The Indian government has concluded industry consultations on the proposed mandatory labelling of AI-generated content, with IT Secretary S. Krishnan indicating that the related rules will be released shortly. In a recent interview with PTI, Krishnan said the industry has taken a “fairly responsible” attitude toward the labelling initiative, understanding the rationale behind the move and raising no significant opposition.
The primary feedback from industry stakeholders has centered on the need for clarity regarding how to differentiate between substantive changes made through AI and routine technical enhancements. “Based on the inputs we have received, we are just consulting the other ministries within government, saying that these are the changes which have been suggested,” Krishnan said. He added that the government is currently assessing which adjustments to accept and how to implement them, with a commitment to releasing the new rules soon.
Krishnan emphasized that the government is not asking industry players to register their content with any third-party entity, nor is it imposing restrictions. “All that is being asked is to label the content,” he stated, asserting that citizens deserve to know whether a piece of content is AI-generated or authentic. He pointed out that even minor AI edits can significantly alter meaning, whereas technical enhancements, such as improvements from a smartphone camera, may improve quality without changing the facts.
“Most of the reaction is about the degree and kind of change,” he explained. “Now advanced technology is such that there is some modification or the other in some sense. In some cases, modification can be very small, but that, in itself, can make a difference.” He highlighted the potential for even a single word change to dramatically affect the outcome of a conversation or piece of media.
Krishnan acknowledged the challenges in delineating between different types of modifications. “Because, as I pointed out, even one or two words changing in a particular sequence of conversation could have a completely different effect and impact,” he said. He reiterated that the government is not opposed to creativity, but underscored the importance of transparency in distinguishing genuine content from AI-generated material.
In October, the government proposed amendments to IT rules mandating the clear labelling of AI-generated content. This move aims to increase the accountability of major platforms such as Facebook and YouTube for verifying and flagging synthetic information. The IT ministry has noted the rise of deepfake audio, video, and synthetic media on social platforms, underscoring the potential for generative AI to create “convincing falsehoods” that could be manipulated to spread misinformation, damage reputations, influence elections, or commit financial fraud.
The proposed amendments seek to establish a clear legal framework for labelling, traceability, and accountability concerning synthetically generated information. The ministry invited comments from stakeholders on the draft amendment, which mandates labelling, visibility, and metadata embedding for AI-generated or modified content, helping to differentiate such material from authentic media.
The draft rules call for platforms to label AI-generated content with prominent markers, covering at least 10 percent of the visual display or the initial 10 percent of an audio clip’s duration. As these developments unfold, they highlight the increasing need for transparency and accountability in the rapidly evolving landscape of AI-generated content.
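To make the draft's numeric thresholds concrete, the minimal sketch below checks whether a hypothetical label would satisfy them: a visual overlay covering at least 10 percent of the display area, or an audible disclosure spanning at least the initial 10 percent of a clip's duration. The data structures, function names, and the area-based reading of "10 percent of the visual display" are illustrative assumptions only, not taken from the draft rules or any official implementation.

```python
# Hypothetical sketch: checking a label against the draft rule's 10 percent thresholds.
# All names and the exact interpretation of the thresholds are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class VisualLabel:
    label_width: int    # pixels occupied by the label overlay
    label_height: int
    frame_width: int    # pixels of the full visual display
    frame_height: int


@dataclass
class AudioLabel:
    label_duration_s: float   # seconds of audible disclosure at the start of the clip
    clip_duration_s: float    # total clip length in seconds


def visual_label_ok(label: VisualLabel, min_fraction: float = 0.10) -> bool:
    """True if the overlay covers at least `min_fraction` of the display area."""
    label_area = label.label_width * label.label_height
    frame_area = label.frame_width * label.frame_height
    return frame_area > 0 and label_area / frame_area >= min_fraction


def audio_label_ok(label: AudioLabel, min_fraction: float = 0.10) -> bool:
    """True if the disclosure spans at least the initial `min_fraction` of the clip."""
    return (
        label.clip_duration_s > 0
        and label.label_duration_s / label.clip_duration_s >= min_fraction
    )


if __name__ == "__main__":
    # A 1920x1080 frame with a full-width 108-pixel banner meets the 10 percent area threshold.
    print(visual_label_ok(VisualLabel(1920, 108, 1920, 1080)))   # True
    # A 3-second disclosure on a 60-second clip falls short of 10 percent of the duration.
    print(audio_label_ok(AudioLabel(3.0, 60.0)))                 # False
```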