The global approach to regulating artificial intelligence (AI) appears to be shifting towards a framework based on ethical pledges rather than enforceable laws. This trend was underscored at the recent India AI Impact Summit in New Delhi, where over 250,000 citizens pledged to use AI ethically, helping India set a Guinness World Record. Prime Minister Narendra Modi introduced the MANAV Vision, a set of five AI governance principles inspired by the Sanskrit word for “human.” The summit also saw the signing of the Delhi Declaration by 89 countries, although none of its provisions are legally binding.
This choice reflects India’s decision to adopt a flexible regulatory approach, in stark contrast to the European Union’s binding AI Act, passed in 2024. Indian officials favor “flexible guardrails over rigid compliance,” a sentiment echoed by the United States, which, under the Trump administration, rescinded prior executive orders on AI in favor of voluntary industry commitments.
The emerging consensus among nations appears less about how to regulate AI and more about avoiding regulation altogether. Countries are increasingly adopting moral language, discussing “ethical frameworks,” “values-based approaches,” and “human-centric design.” For instance, Harvard is offering a course addressing the intersection of mindfulness and AI ethics, while recent discussions among Christian scholars at a National Religious Broadcasters convention have highlighted the need for moral frameworks as AI transforms human relationships.
These dialogues are significant, but they do not equate to regulation. A pledge is not a legal mandate, and the growing focus on ethical discussion lacks the teeth necessary for enforceable governance. In South Africa, the quest for effective AI regulation is complicated by the absence of clear guidelines. The country’s forthcoming national AI policy, projected for completion in the 2026–2027 financial year, is expected to adopt a “sector-specific, risk-based approach,” layered onto existing laws rather than establishing standalone regulations.
Meanwhile, a high-stakes confrontation is unfolding between the Pentagon and Anthropic over the company’s Constitutional AI (CAI). The Pentagon has dismissed Anthropic’s self-imposed ethical guidelines on mass surveillance and autonomous weapons as “woke AI” and demanded unrestricted access to the model. Defense Secretary Pete Hegseth’s ultimatum has raised serious questions about the viability of ethical guardrails in a military context, culminating in Hegseth designating Anthropic a “supply-chain risk” to national security and cutting the company off from federal contracts.
The fallout from this standoff has immediate implications for the broader AI landscape. OpenAI has stepped in to fill the gap, announcing a new agreement with the Pentagon while maintaining its own ethical standards for military applications. The episode highlights the fragility of partnerships between tech firms and government entities and raises doubts about the efficacy of ethical frameworks when they collide with national security interests.
As South Africa waits for its AI policy to materialize, it finds itself at a significant crossroads. Automated decision-making systems are already in use across various sectors, including finance and human resources, yet without dedicated oversight mechanisms. Algorithms that generate false information or make biased decisions can operate unchecked, leaving citizens with limited means for recourse.
The discourse around “mindful AI” and “ethical guardrails” takes on a different tone in a context like South Africa’s, where such frameworks are still largely theoretical. While India’s MANAV Vision has mobilized a global dialogue around AI principles, South Africa’s regulatory approach remains in limbo. The upcoming public comment period for the national AI policy presents an opportunity for civil society and various stakeholders to influence the direction of AI governance.
The growing emphasis on ethical pledges over legal requirements reflects a larger global challenge: AI systems are evolving faster than legislative processes can respond. As nations grapple with these advances, the question remains whether moral language will suffice in the absence of enforceable law. South Africa’s forthcoming AI policy may ultimately determine whether the country adopts a robust regulatory framework or, like many of its peers, settles for a series of aspirational principles. The world watches as South Africa navigates its own path through the intricate landscape of AI regulation.