India’s digital public sphere is experiencing a transformative moment, driven by advancements in artificial intelligence (AI) that are reshaping how information is generated, consumed, and trusted. The Indian government has recognized both the opportunities and risks posed by these technologies, particularly concerning individual dignity and social harmony. In response, it has strengthened the legal and policy framework governing digital intermediaries.
Recent amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, alongside the India AI Governance Guidelines, 2025, released in November 2025 under the IndiaAI Mission, reflect a cohesive approach. This framework imposes binding legal obligations aimed at addressing specific harms while outlining policy principles to guide responsible AI adoption.
The amended guidelines introduce a precise definition of “synthetically generated information,” which encompasses content that is artificially or algorithmically created or altered to appear authentic. Importantly, this definition excludes routine activities such as technical editing, as well as educational material, thereby ensuring clarity in legal obligations. By incorporating synthetically generated information into the definition of “information,” the rules bring such content within the scope of due diligence, grievance redressal, and intermediary responsibility, ensuring that emerging forms of digital harm are not relegated to informal moderation.
Another significant aspect of the revised framework is the clear delineation that good-faith actions taken by intermediaries, whether through automated tools or other reasonable means, do not undermine statutory protections. This fosters a compliance-enabling environment while maintaining accountability. The framework notably shifts from reactive moderation to proactive governance, requiring intermediaries to implement reasonable measures to prevent the circulation of unlawful content during its creation or dissemination.
For lawful synthetically generated content, the rules require clear and prominent labelling, supported by persistent metadata or provenance mechanisms, wherever technically feasible. The modification or removal of such identifiers is explicitly prohibited, reinforcing a culture of transparency. This techno-legal approach emphasizes that transparency serves as a safeguard of dignity and trust, empowering citizens to distinguish between authentic and synthetic content in real time.
The amended guidelines also require intermediaries to periodically inform users of their rights, obligations, and the consequences of non-compliance in accessible language. For large social media intermediaries, the framework imposes heightened responsibilities, including obtaining user declarations on synthetically generated information and deploying technical measures to assess the accuracy of these declarations.
Non-compliance with these requirements may constitute a lapse in due diligence, triggering statutory consequences. This reflects a well-calibrated allocation of responsibility, recognizing that platforms with broader societal impact must shoulder greater governance obligations. The India AI Governance Guidelines, 2025, further articulate a policy framework for responsible AI adoption, emphasizing transparency, accountability, and human-centric design while operating within existing legal boundaries.
India’s approach to governing emerging technologies is grounded in constitutional legitimacy and institutional capacity. While the current framework relies on carefully crafted rules and policy guidance, the Indian Parliament retains the authority to adapt legislation to evolving technological realities in the public interest. This strategy is not an assertion of regulatory excess but rather a reaffirmation of democratic stewardship, ensuring innovation aligns with constitutional values and individual dignity.
The evolving framework for synthetic media exemplifies India’s principle-based governance, which favors extensive consultation and proportionality over rigid responses. By combining definitional precision, proactive safeguards, mandatory transparency, and effective oversight, India aims to bolster confidence in its digital public sphere.
The challenge posed by synthetic media ultimately revolves around trust: ensuring that technology does not outpace rights, that platforms remain accountable, and that institutions respond effectively to citizen concerns. By empowering users through enforceable procedures and embedding transparency by design, India sets a foundation for a resilient digital democracy. This response not only provides a governance model for domestic challenges but also positions India as a reference point for democratic, rights-respecting regulation in an AI-mediated world.
S Krishnan is the secretary of the Ministry of Electronics and Information Technology (MeitY). The views expressed are personal.