
India’s AI Summit: A Call for Global Oversight to Shape Responsible Tech Governance

India’s AI Impact Summit aims to position the country as a responsible AI leader, advocating for a regulatory framework that balances innovation with user rights oversight.

AI Governance in India

Global CEOs and technology leaders are set to gather in New Delhi for the AI Impact Summit, which will attract tens of thousands of attendees. Investors, technology firms, and governments are increasingly viewing India as a burgeoning centre for artificial intelligence (AI), a site to test scalability, accelerate talent development, and establish AI frameworks in the Global South. However, India’s ascent in the AI space will depend not only on the business it attracts or the high-profile figures on its discussion panels but also on its willingness to set regulatory standards alongside creating opportunities.

Investors are not just seeking innovative strategies for growth; they also desire assurances that responsible governance will accompany new technologies. This demand is particularly crucial in markets traditionally exposed to the adverse effects of unregulated technology implementations. If the Global South is to play a pivotal role in shaping the future of AI, it must lead in establishing respect for user rights and accountability from the beginning.

India has a unique opportunity to exemplify that responsible AI development is not a hindrance to growth but rather a competitive advantage. Companies looking to operate in India should anticipate a regulatory framework that embraces both innovation and responsibility. The Indian government has proposed seven principles for responsible AI, advocating for an independent oversight model as the most effective form of regulation. The way India defines its approach will likely influence AI industry standards and regulations beyond its borders.

As a member of the Oversight Board, I can assert that this body serves as a vital mechanism for ensuring human rights oversight over Big Tech’s decisions. It independently addresses some of Meta’s most contentious content issues through a consistent framework rooted in human rights principles. This not only provides redress for users but also promotes accountability and transparency, which are beneficial for businesses that advertise on Meta’s platforms and for investors prioritizing responsible investments.

The Oversight Board has defended free expression while determining when limits are permissible to prevent real-world harm, including violence. Independent of government interference and commercial interests, it has shaped Meta's policies and processes in significant ways, for example by having Meta's rules made available in multiple languages and by ensuring users are told which standard they may have violated before punitive action is taken.

Independent oversight can also be applied to AI governance, as the Board has already ruled on AI-generated content and automated moderation. In a notable case involving manipulated images of President Joe Biden, the Board clarified that AI generation or modification of images should not inherently justify removal, as this could restrict free speech. However, labeling content as fake can help mitigate potential harm by alerting users that what they are viewing is not genuine.

In circumstances where harm must be prevented, content has been ordered removed from Meta’s platforms, such as explicit AI-generated images of female public figures that violated their rights to privacy and protection. While more work remains, Meta credits the Board’s initiatives as the catalyst for its AI-labeling program, which successfully labeled hundreds of millions of AI-generated or manipulated pieces of content in just one month.

We have also urged Meta to treat AI-generated posts uniformly across various formats—audio, images, and video—especially in high-stakes situations such as elections and financial scams, where the impact of fraudulent content can be significant. Currently, we are deliberating on how to address pressing issues, including AI content that exacerbates conflict, such as the recent tensions between Israel and Iran.

While social media and large language models (LLMs) serve different functions—social media primarily dealing with user-generated posts and LLMs synthesizing information from various sources—users of both platforms need mechanisms for objection and redress against content they find harmful or hateful. Both social media platforms and LLMs operate on a global scale and therefore require policies that respect local nuances while ensuring consistency across international borders. The diversity of languages within India alone necessitates tailored approaches to LLM development that are inclusive of local contexts.

Independent oversight plays a crucial role in ensuring that diverse local voices are included in governance; the Oversight Board itself comprises members of more than a dozen nationalities. This representation has, for instance, enabled the restoration of content that moderators initially flagged but had in fact misinterpreted.

In the rapidly evolving landscape of AI development, independent ethical decision-making bodies can motivate companies to adopt better, rights-respecting policies. To date, Meta remains the only company that has subjected its platforms to meaningful independent oversight and public scrutiny. Other companies that currently rely on advisory groups should consider implementing similar oversight, whether through our organization or another entity, drawing on our extensive experiences to navigate these challenges efficiently.

A noticeable difference between social media and LLMs lies in their content policies. Social media platforms often maintain extensive content management guidelines, while those for LLMs are typically minimal: Meta AI's user policy spans just over three pages, OpenAI's guidelines run to fewer than 1,000 words, and Anthropic's recently unveiled "constitution" lacks any means of external enforcement.

Though AI companies may resist moderation, it has already begun: Meta, for example, has barred its LLMs from engaging in impersonation and disinformation. Rules governing what content is permissible on AI platforms will inevitably emerge, as they did for social media. AI companies face a choice: establish those rules proactively or be compelled to comply through legislation. At this critical juncture for AI development in India and globally, independent oversight can facilitate responsible growth that meets the needs of local users while satisfying international markets, reassuring investors keen on making India the next AI super hub.

Sudhir Krishnaswamy is Vice Chancellor and Professor of Law at the National Law School of India University and a member of the Oversight Board. (Disclaimer: These are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)
