As 2025 draws to a close, India stands at a crossroads in its battle over digital rights and artificial intelligence. The year brought intense parliamentary debate, new laws, and sharp disagreement among privacy advocates, tech companies, and the government. The central aim has been to push India's digital economy forward while protecting people from the risks that AI brings.
The Data Protection Law
A pivotal development came in November 2025 with the notification of the Digital Personal Data Protection (DPDP) Rules. The milestone follows the passage of the DPDP Act in 2023, which laid the legislative groundwork but left open many questions about how it would work in practice. The new rules serve as the operating manual, spelling out how organizations must handle personal data.
Under the DPDP Rules, companies must clearly disclose what data they collect, why they collect it, and how long they keep it. These notices must be written in plain language rather than jargon, so the average citizen can understand them. Individuals must also give explicit consent before their data is used, a shift away from consent clauses buried in lengthy terms and conditions.
In a key protection for individuals, organizations that suffer a security breach must inform affected users without delay and submit a detailed report to the Data Protection Board within 72 hours, a tight window intended to keep users informed quickly. The rules take effect in phases: the initial provisions came into force in November 2025, while full compliance is not required until May 2027.
The Deepfake Crisis
While data protection concerns how personal information is handled, deepfakes pose a more sinister challenge: convincingly manipulated audio and video. In October 2025, the Indian government unveiled its first rules targeting deepfakes, proposing amendments to the existing IT Rules to address the threat. Deepfake scams have surged, with criminals producing fake videos of celebrities soliciting money and cloning voices to commit fraud.
To combat this, the new rules require creators of AI-generated content to label it clearly, with the marking covering at least 10% of the screen and made non-removable. Platforms that host such content, including YouTube and Instagram, must take down deepfakes within 36 hours of being notified. The harder problem is enforcing these authenticity checks without stifling legitimate creative expression, such as parody or AI-assisted art.
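For a rough sense of what the labelling requirement implies in practice, the sketch below shows one way a creator's pipeline might stamp a visible "AI-generated" notice covering at least 10% of an image. The file names, label text, and bottom-band placement are illustrative assumptions, not anything specified by the rules themselves.

```python
# Illustrative sketch only: stamping a visible "AI-generated" notice that
# covers at least 10% of an image's area, roughly in the spirit of the
# labelling requirement. Label text, placement, and file names are assumptions.
from PIL import Image, ImageDraw, ImageFont

MIN_COVERAGE = 0.10  # label should cover at least 10% of the visual area

def stamp_ai_label(in_path: str, out_path: str, text: str = "AI-GENERATED CONTENT") -> None:
    img = Image.open(in_path).convert("RGB")
    w, h = img.size

    # A full-width band whose height is 10% of the image height covers
    # exactly 10% of the total pixel area.
    band_h = max(1, int(h * MIN_COVERAGE))

    draw = ImageDraw.Draw(img)
    # Opaque band along the bottom edge so the notice stays legible.
    draw.rectangle([(0, h - band_h), (w, h)], fill=(0, 0, 0))
    font = ImageFont.load_default()
    draw.text((10, h - band_h + band_h // 3), text, fill=(255, 255, 255), font=font)

    img.save(out_path)

if __name__ == "__main__":
    stamp_ai_label("synthetic_frame.png", "synthetic_frame_labelled.png")
```

Burning the band directly into the pixels is one simple reading of "non-removable"; real systems would likely pair a visible label with embedded metadata or other provenance signals.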
Simultaneously, November 2025 saw the release of comprehensive AI Governance Guidelines. Rather than binding legislation, these guidelines promote responsible AI use in India, built around seven core principles or “Sutras”: trust as the foundation, people first, innovation over restraint, fairness and equity, accountability, understandability by design, and safety, resilience and sustainability. The framework also calls for AI-specific institutions, including an AI Safety Institute and dedicated governance bodies.
Notably, India refrained from crafting a singular, overarching law for AI. Instead, the government emphasized existing laws that address various aspects of AI, including the DPDP Act for data protection and the IT Act concerning cybercrimes, weaving them together to create a cohesive regulatory landscape.
The tech sector had a tumultuous 2025 as businesses rushed to adapt to the new rules. Startups, e-commerce platforms, and social media companies grappled with compliance, and some complained that the rules were overly burdensome or ambiguous. Other tech leaders argued that robust data protection could build trust and strengthen India's standing as a favorable place to do business.
The Big Picture
The events of 2025 underscore India's struggle to balance three goals: safeguarding citizen data and privacy, combating harmful deepfakes, and keeping room for technological innovation. These objectives often pull against one another, yet the gradual rollout of the DPDP Rules, the first deepfake regulations, and the AI Governance Guidelines show the government responding to each of them. As the measures move from announcement to enforcement, the pressure on firms to comply intensifies.
Civil society remains vigilant, watching whether the new norms serve their intended purpose. As 2025 ends, India has set a fresh paradigm for its digital landscape, with data protection and AI regulation at the forefront. Whether these measures can compel companies to comply, protect citizens, and blunt the threat of deepfakes will become clear in 2026 and beyond.