The Ministry of Electronics and Information Technology (MeitY) has announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requiring social media platforms to prominently label AI-generated content. The mandate, which takes effect on February 20, 2026, aims to enhance transparency regarding the authenticity of digital content and curb the spread of misinformation, such as deepfakes. Along with the labeling requirement, the amendments tighten takedown timelines for a variety of content, reducing the window for action from 24-36 hours to just two to three hours.
This move is part of a broader effort to address concerns surrounding “synthetically generated” content, which includes AI-generated images and videos. Platforms with over five million users will now have to verify user declarations regarding AI-generated content and implement necessary technical measures before such content is published. MeitY has highlighted the importance of preventing the dissemination of misleading information that could harm users or threaten national integrity.
The final rules follow a draft proposal published in October, in which the definition of “Synthetically Generated Information” (SGI) was broader, encompassing any audiovisual content that had been AI-modified or AI-generated. The final amendments, however, exempt certain types of content from the labeling requirement, such as smartphone photos automatically retouched by camera applications and special effects used in films. The regulations also prohibit specific harmful categories of SGI outright, including child sexual exploitation material and deepfakes that misrepresent real individuals.
To enforce the new guidelines, the government has called on major platforms to deploy reasonable technical measures to detect unlawful SGI. A senior MeitY official emphasized that large platforms already possess sophisticated tools for this purpose. The government is also encouraging AI firms and platforms that belong to the Coalition for Content Provenance and Authenticity (C2PA) to collaborate on standards for invisibly watermarking AI-generated content so that labels can be recognized across platforms.
The amendments also significantly shorten the timelines for content takedown requests. Takedown notices issued by government authorities and police officials, which previously allowed a response window of 24-36 hours, must now be acted on within two to three hours. For user complaints about categories of content deemed illegal, such as misinformation and threats to sovereignty, response times have been cut from two weeks to one week. The tighter deadlines are intended to limit the harm that can spread while unlawful content remains online.
Moreover, platforms must now remind users of their terms and conditions more frequently: at least once every three months rather than once a year. These reminders must clarify the consequences of non-compliance and outline users’ reporting obligations. Platforms must also warn users about the risks of harmful deepfakes and illegal AI-generated content, which can carry legal repercussions, including disclosure of a user’s identity to law enforcement and the suspension or termination of accounts.
As the digital landscape continues to evolve with advances in AI, these regulations reflect a proactive approach by the Indian government to governing content authenticity and user safety. The forthcoming implementation underscores the urgency of effective oversight in the age of digital misinformation. Tech industry stakeholders will need to adapt quickly, ensuring compliance while maintaining user trust in their platforms.
See also
AI Technology Enhances Road Safety in U.S. Cities
China Enforces New Rules Mandating Labeling of AI-Generated Content Starting Next Year
AI-Generated Video of Indian Army Official Criticizing Modi’s Policies Debunked as Fake
JobSphere Launches AI Career Assistant, Reducing Costs by 89% with Multilingual Support
Australia Mandates AI Training for 185,000 Public Servants to Enhance Service Delivery