India’s Minister of Information Technology, Ashwini Vaishnaw, emphasized the need for a comprehensive techno-legal approach to combat AI-generated harmful content during the ongoing IndiaAI Impact Summit 2026. Speaking to reporters, Vaishnaw highlighted a growing global consensus on the responsible use of artificial intelligence, noting that stronger regulations are essential to address issues like deepfakes. “A good consensus is emerging among global leaders. Everybody believes that AI should be used for good, and all harmful impacts must be contained,” he stated.
Vaishnaw underscored that the challenge of countering AI misuse cannot be resolved solely through legislation. “It has to be done through a techno-legal approach and cannot be done through passing a law. It has to be done through a technological approach where technology can be used in a safe way,” he explained. To facilitate this, India has established the IndiaAI Safety Institute (AISI), which collaborates with various academic institutions to develop technical solutions aimed at mitigating the risks associated with AI technologies.
As concerns surrounding deepfakes escalate, Vaishnaw reiterated the call for stronger regulations. “I think we need a stronger regulation on deepfakes. It is a problem growing day by day. We need to protect our society from this harm,” he told reporters. He noted that the government has opened a dialogue with industry stakeholders and that the IT Committee of Parliament has examined the issue, producing several recommendations aimed at bolstering the regulatory framework. “Certainly, I believe that we need much stronger regulation of deepfakes,” he added, stressing the urgency of building consensus within Parliament for more stringent measures.
The push to regulate AI-generated content gained momentum recently when the Indian government formalized oversight of technologies such as deepfake videos and synthetic audio through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified via gazette on February 20. The amendments mandate that platforms such as YouTube, Meta-owned Instagram and Facebook, and X (formerly Twitter) clearly label all synthetically generated information, ensuring users can easily identify such content.
The rules also require these platforms to deploy automated verification tools that assess the format, source, and nature of content before it is published, a move intended to enhance transparency and user safety as the influence of AI technologies continues to grow.
Vaishnaw also addressed the necessity of age-based content regulations to safeguard younger audiences. He stated that the government is committed to establishing guidelines that differentiate content accessibility based on the age of the users. “We have already created this age-based differentiation on the content, which is accessible to students and young people,” he affirmed.
The ongoing discussions and regulatory updates signal India’s commitment to ensuring that technological advancements in AI are aligned with ethical standards and societal safety. As AI continues to evolve, the balance between innovation and oversight will remain critical. The government’s proactive stance in establishing a regulatory framework could serve as a model for other nations grappling with similar challenges posed by rapidly advancing AI technologies.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health