New Delhi: The Indian government has reinforced its regulatory framework for AI-generated content and deepfakes, tightening intermediaries' compliance obligations under the Information Technology Act and its associated rules. Minister of State Jitin Prasada informed the Lok Sabha that a series of advisories and amendments has been introduced to guide intermediaries in handling unlawful digital content. The measures are intended to ensure a safe and accountable cyberspace in India.
The government has issued multiple advisories targeting social media platforms and intermediaries, emphasizing strict adherence to due diligence obligations set forth in the Information Technology Act, 2000, and the IT Rules, 2021. These advisories notably address risks linked to synthetically generated information (SGI), including deepfakes, and call for platforms to take proactive steps to prevent the creation and dissemination of misleading or harmful AI-generated content.
In a significant move, the government amended the IT Rules in February 2026 to counter emerging threats from AI-generated and other synthetic media. Under the amended rules, platforms are mandated to implement technical measures to detect and restrict unlawful AI-generated content, with the aim of fostering a more secure online environment.
Intermediaries are now required to clearly label AI-generated content, enhancing transparency and helping users distinguish between synthetic media and genuine material. Additionally, platforms must maintain traceable metadata for such content, enabling authorities to track misuse and enforce accountability more effectively.
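The rules describe the labelling and traceability obligation at a policy level rather than prescribing a data format. Purely as an illustration, the sketch below shows one way a platform might attach a label and traceability record to synthetically generated media; the `SGILabel` structure, its field names, and the use of a SHA-256 hash are assumptions for the example, not anything mandated by the IT Rules.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json
import uuid


@dataclass
class SGILabel:
    """Hypothetical provenance record attached to AI-generated media.

    All field names are illustrative; the amended rules do not prescribe a schema.
    """
    content_id: str       # platform-assigned identifier for the media item
    content_sha256: str   # hash of the media file, for tamper-evident traceability
    is_synthetic: bool    # user-facing "AI-generated" label flag
    generator: str        # declared tool or model that produced the content
    uploader_id: str      # account that uploaded the content, for traceability
    created_at: str       # ISO-8601 timestamp of labelling


def label_synthetic_content(media_bytes: bytes, generator: str, uploader_id: str) -> SGILabel:
    """Build a label plus traceability record for a synthetically generated item."""
    return SGILabel(
        content_id=str(uuid.uuid4()),
        content_sha256=hashlib.sha256(media_bytes).hexdigest(),
        is_synthetic=True,
        generator=generator,
        uploader_id=uploader_id,
        created_at=datetime.now(timezone.utc).isoformat(),
    )


if __name__ == "__main__":
    record = label_synthetic_content(b"<media bytes>", "example-image-model", "user-123")
    # In this sketch, the JSON record would be stored alongside the media and
    # surfaced to users as a visible "AI-generated" label.
    print(json.dumps(asdict(record), indent=2))
```

In practice, a platform would pair such a record with a visible on-screen label; the metadata half of the obligation is what lets authorities trace a piece of synthetic content back to its origin.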
To ensure rapid response to illegal content, the revised regulations stipulate that social media platforms must remove unlawful material within three hours of receiving valid orders from courts or government agencies. This swift action is designed to minimize the potential harm caused by such content.
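Operationally, compliance teams would need to track that window per order. The following minimal sketch, assuming the clock starts when a valid order is received, shows how the takedown deadline might be computed; it is a hypothetical helper, not part of the rules themselves.

```python
from datetime import datetime, timedelta, timezone

# Three-hour removal window described in the amended rules.
REMOVAL_WINDOW = timedelta(hours=3)


def removal_deadline(order_received_at: datetime) -> datetime:
    """Latest time by which flagged content must be taken down (illustrative)."""
    return order_received_at + REMOVAL_WINDOW


if __name__ == "__main__":
    received = datetime(2026, 3, 1, 10, 15, tzinfo=timezone.utc)
    print("Take down by:", removal_deadline(received).isoformat())
```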
The framework also reinforces protections against severe harms, addressing issues such as child sexual exploitation material, non-consensual imagery, and impersonation enabled by AI tools. This comprehensive approach is intended to bolster user safety while holding platforms accountable for their responsibilities.
In addition, the government has introduced a Standard Operating Procedure (SOP) to combat non-consensual intimate imagery online. This SOP outlines concrete guidelines for victims, social media platforms, and law enforcement agencies, aiming to facilitate more effective responses to such incidents. Intermediaries are also tasked with educating users about the legal consequences of sharing unlawful content, fostering a culture of responsible online behavior.
Altogether, the enhanced regulatory framework surrounding AI content and digital governance in India reflects a proactive stance towards emerging risks in the digital ecosystem. It seeks to balance innovation with accountability, addressing the challenges posed by rapid technological advancements while prioritizing user safety.
The ongoing developments in India’s regulatory landscape illustrate the government’s commitment to creating a secure digital environment. As AI technologies continue to evolve, these measures may serve as a reference point for other nations grappling with similar challenges. The focus on clear labeling, swift action against illegal content, and user education marks a significant shift towards more responsible digital content management.