AI Governance in India
Global CEOs and technology leaders are set to gather in New Delhi for the AI Impact Summit, which is expected to draw tens of thousands of attendees. Investors, technology firms, and governments increasingly view India as a burgeoning centre for artificial intelligence (AI): a place to test scalability, accelerate talent development, and establish AI frameworks for the Global South. However, India’s ascent in AI will depend not only on the business it attracts or the high-profile figures on its discussion panels but also on its willingness to set regulatory standards alongside creating opportunities.
Investors are not just seeking innovative strategies for growth; they also want assurance that responsible governance will accompany new technologies. This demand is particularly acute in markets that have traditionally borne the adverse effects of unregulated technology rollouts. If the Global South is to play a pivotal role in shaping the future of AI, it must lead in embedding respect for user rights and accountability from the outset.
India has a unique opportunity to exemplify that responsible AI development is not a hindrance to growth but rather a competitive advantage. Companies looking to operate in India should anticipate a regulatory framework that embraces both innovation and responsibility. The Indian government has proposed seven principles for responsible AI, advocating for an independent oversight model as the most effective form of regulation. The way India defines its approach will likely influence AI industry standards and regulations beyond its borders.
As a member of the Oversight Board, I can assert that this body serves as a vital mechanism for ensuring human rights oversight over Big Tech’s decisions. It independently addresses some of Meta’s most contentious content issues through a consistent framework rooted in human rights principles. This not only provides redress for users but also promotes accountability and transparency, which are beneficial for businesses that advertise on Meta’s platforms and for investors prioritizing responsible investments.
The Oversight Board has defended free expression while determining when limits may be imposed to prevent real-world harm, including violence. Independent of government interference and commercial interests, it has shaped Meta’s policies and processes in significant ways, for example by having the company’s rules made available in multiple languages and by ensuring users are told which standards they may have violated before punitive action is taken.
Independent oversight can also be applied to AI governance; the Board has already ruled on AI-generated content and automated moderation. In a notable case involving a manipulated video of President Joe Biden, the Board clarified that the fact that content has been generated or altered by AI should not in itself justify removal, as that could restrict free speech. Labeling content as fake, however, can help mitigate potential harm by alerting users that what they are viewing is not genuine.
In circumstances where harm must be prevented, content has been ordered removed from Meta’s platforms, such as explicit AI-generated images of female public figures that violated their rights to privacy and protection. While more work remains, Meta credits the Board’s initiatives as the catalyst for its AI-labeling program, which successfully labeled hundreds of millions of AI-generated or manipulated pieces of content in just one month.
We have also urged Meta to treat AI-generated posts uniformly across various formats—audio, images, and video—especially in high-stakes situations such as elections and financial scams, where the impact of fraudulent content can be significant. Currently, we are deliberating on how to address pressing issues, including AI content that exacerbates conflict, such as the recent tensions between Israel and Iran.
While social media and large language models (LLMs) serve different functions—social media primarily dealing with user-generated posts and LLMs synthesizing information from various sources—users of both platforms need mechanisms for objection and redress against content they find harmful or hateful. Both social media platforms and LLMs operate on a global scale and therefore require policies that respect local nuances while ensuring consistency across international borders. The diversity of languages within India alone necessitates tailored approaches to LLM development that are inclusive of local contexts.
Independent oversight also plays a crucial role in ensuring that diverse local voices are included in governance; the Oversight Board itself comprises members of more than a dozen nationalities. This representation has, for instance, enabled the restoration of content that moderators initially flagged but were later found to have misinterpreted.
In the rapidly evolving landscape of AI development, independent ethical decision-making bodies can motivate companies to adopt better, rights-respecting policies. To date, Meta remains the only company that has subjected its platforms to meaningful independent oversight and public scrutiny. Other companies that currently rely on advisory groups should consider implementing similar oversight, whether through our organization or another entity, drawing on our extensive experiences to navigate these challenges efficiently.
A noticeable difference between social media and LLMs lies in their content policies. Social media platforms typically have extensive content-management guidelines, while those for LLMs are minimal: Meta AI’s user policy spans just over three pages, and OpenAI’s guidelines run to fewer than 1,000 words. Anthropic’s recently unveiled “constitution”, meanwhile, lacks any means of external enforcement.
Though AI companies may resist moderation, it is already happening: Meta, for example, has banned its LLMs from engaging in impersonation and disinformation. Rules governing what content is permissible on AI platforms will inevitably emerge, as they did for social media. AI companies face a choice: act proactively to shape those rules or be compelled to comply through legislation. At this critical juncture for AI development, in India and globally, independent oversight can support responsible growth that meets the needs of local users while satisfying international markets, reassuring investors keen on making India the next AI super hub.
Sudhir Krishnaswamy is Vice Chancellor and Professor of Law at the National Law School of India University and a member of the Oversight Board. (Disclaimer: These are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)