Amnesty International criticized the recent Indian AI Impact Summit 2026 for failing to secure meaningful commitments from governments and technology companies to halt “destructive practices” associated with artificial intelligence. The organization underscored that the summit did not adequately address the human rights risks linked to AI deployment, including controversial applications such as predictive policing, biometric surveillance, and automated welfare administration.
Amnesty’s concerns reflect ongoing issues in AI governance, particularly the reliance on voluntary pledges and industry standards, which are no substitute for enforceable regulations. The group emphasized that binding legal frameworks are essential to prevent rights violations and ensure access to remedy for those affected. Critics of the summit’s outcomes, including the Internet Freedom Foundation, described the event as largely a “spectacle,” asserting that it prioritized technological ambitions and geopolitical positioning over accountability measures.
The summit’s focus on technological advancement came under fire from a coalition of digital rights groups, which noted that it overlooked key recommendations from grassroots organizations. These recommendations included calls for transparency obligations and independent oversight mechanisms. This lack of meaningful engagement with community voices raises questions about the inclusivity of the dialogue surrounding AI governance.
Concerns regarding AI’s impact on marginalized populations were further highlighted in analyses released in conjunction with the summit. An international non-profit organization documented evidence that AI systems can disproportionately harm racial and religious minorities, migrants, and low-income groups, particularly in contexts such as border management and law enforcement. In a country like India, characterized by caste divisions and religious tensions, these biases could inflict irreversible damage on already vulnerable communities.
In April 2024, Amnesty International warned that automated social protection systems in India and other regions risked excluding individuals from essential welfare benefits due to flawed data and algorithmic bias, compounded by insufficient human oversight. The organization’s findings underscore the urgency of prioritizing human rights in the development and governance of AI technologies.
Policy analysts have echoed these sentiments, urging governments to center human rights in AI governance frameworks. The Observer Research Foundation articulated that AI policy should focus on placing “people at the heart of the AI story,” stressing the importance of participatory governance and safeguards against algorithmic discrimination. Their analysis advocates embedding rights protections at the design stage of AI systems rather than relying on corrective measures implemented post hoc.
The summit took place against a backdrop of increasing global efforts to address AI governance challenges. In July 2024, the UN General Assembly adopted a resolution aimed at bridging the AI gap for developing countries and promoting equitable access to AI technologies. However, Amnesty International stressed that global commitments must be accompanied by robust domestic legal frameworks that explicitly prohibit rights-violating applications of AI.
The discussions at the Indian AI Impact Summit have ignited a broader conversation about the necessity of balancing technological progress with ethical considerations. As AI technologies continue to evolve and permeate various facets of daily life, the imperative for accountable governance structures becomes increasingly urgent. The summit serves as a pivotal moment in the ongoing dialogue regarding the intersection of AI, human rights, and social justice, underscoring the need for continued advocacy and reform in the field.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health