Debates over the governance of artificial intelligence (AI) have intensified, as many stakeholders acknowledge its potential to be transformative across various sectors. However, a more pressing question remains: how will the benefits and risks of AI be distributed? Will it truly be a case where everyone wins, or will there be significant disparities between those who benefit from AI advancements and those who do not?
Proponents of AI often argue that its integration will lead to overall prosperity, suggesting that “a rising tide lifts all boats.” Critics, however, highlight the risks of exacerbating inequality and environmental degradation, dismissing the notion that AI will autonomously resolve these challenges. Some voices within the AI development community express fears of a dystopian future, where AI, either through misaligned objectives or the emergence of superintelligence, could turn against humanity.
Amid these extremes, a middle ground is emerging that examines the potential gains and losses from AI. Discussions in realist circles often frame AI as an arms race, particularly between Western nations and China. This framing underscores the shifting dynamics of economic and political power, emphasizing the growing influence of tech giants over traditional government authority.
AI is shifting economic and, increasingly, political power away from governments.
An alternative perspective focuses on the North-South divide: more than 750 million people lack stable electricity and roughly 2 billion remain unconnected to the internet. Many in developing countries worry less about the technology's potential misuse than about the missed opportunity of being left behind in the AI revolution. Yet perhaps the most critical divide lies not in geography but between the public and private sectors. The power wielded by major tech companies today rivals that of historical entities like the East India Company, which controlled half of global trade in the early 19th century.
Governments are struggling to adapt to this rapidly evolving landscape. China has demonstrated that a determined state can reassert control over its tech sector, imposing significant restrictions on its largest firms. The European Union is attempting to rein in the industry with its AI Act, yet early signs point to hesitance amid fears of economic repercussions. The U.S. has been reluctant to enact federal regulation, leaving states to take the lead on AI legislation.
This regulatory paralysis is understandable; AI is associated with economic growth and competitive advantage, leading politicians to fear that stringent regulations could stifle innovation. Technology companies, equipped with extensive lobbying resources, are deeply embedded in the daily lives of consumers while simultaneously enhancing surveillance and displacing labor.
So, what measures can be taken to mitigate the risks associated with AI? If self-regulation by companies is deemed unreliable, and governments are hesitant to legislate, then the responsibility may fall to the users. Consumers can express their disapproval by choosing not to support companies that fail to prioritize safety and social equity. However, the challenge remains that individual consumers often have little leverage against corporations driven by profit motives.
Organized user groups might offer a solution, as collective action has previously demonstrated the ability to influence market practices. Movements advocating for global privacy and protection could similarly inspire norms around AI development, emphasizing responsible use and greater transparency in how algorithms operate and are trained.
The first true AI emergency may not be an existential catastrophe but the steady hollowing out of public authority.
Transparency concerning the hidden costs of AI, including its environmental impact, is increasingly crucial. Companies announcing a retreat from climate commitments in favor of AI investments highlight the urgent need for accountability. By disclosing resource consumption, such as electricity and water usage, businesses could face pressure from informed users to adopt more sustainable practices.
Market mechanisms alone, however, will not suffice. The 2007–08 financial crisis showed that institutions deemed "too big to fail" had arguably grown too powerful to be left unregulated. Current antitrust actions by the U.S. Justice Department against Google and Apple, along with the Federal Trade Commission's case against Amazon, indicate a growing recognition of this problem. Across the Atlantic, the EU has imposed stringent obligations on "gatekeepers" under the Digital Markets Act, yet real progress in breaking up large tech entities remains elusive.
While some advocate for nationalization of tech infrastructure, viewing it as essential to national security and economic stability, such proposals have yet to gain traction in Western countries. Concerns about hindering innovation or falling behind international rivals complicate these discussions.
International organizations face a greater challenge, lacking the impetus that a singular catastrophic event might provide. Unlike the clear existential threat posed by nuclear weapons in the 1950s, the dangers posed by AI are diffuse and complex. Without a unifying crisis to galvanize action, global coordination remains problematic.
The potential risks associated with AI cannot be overlooked. Even if apocalyptic scenarios do not materialize, a more subtle transformation is taking place. The authority traditionally held by states is increasingly being transferred to private entities. The pressing question is not whether AI will be governed, but by whom.