The regulation of technology, particularly artificial intelligence (AI), has emerged as a contentious issue between the United States and the European Union. The implications of how this regulatory landscape evolves are profound, influencing not just transatlantic relations but also the broader global tech ecosystem.
A recent example of this tension is last week's release of the Claude Mythos tool by Anthropic, a US-based AI firm. Billed as the most advanced model for detecting cybersecurity risks, Claude Mythos underscores the rapid pace of innovation in the AI sector.
Central to the controversy is the level of regulatory engagement during the development of Claude Mythos. Representatives from Ireland’s National Cyber Security Centre (NCSC) testified before the Oireachtas Communications Committee last Tuesday, noting that while they reviewed the technical documentation released by Anthropic, the engagement with regulatory bodies was minimal. The NCSC confirmed that the capabilities outlined by Anthropic suggest a significant evolution in how hardware and software vulnerabilities can be identified and addressed.
This experience is reflective of a broader trend across EU member states. While national regulators received a preview of the technical materials, there was a notable absence of extensive consultation, which has elicited considerable concern within the bloc.
In its defense, Anthropic argues that the limited availability of Claude Mythos, which is accessible only to around 40 technology firms, obviates the need for standard regulatory protocols. That rationale, however, has done little to ease unease across the EU about oversight of advanced AI technologies.
The introduction of the EU AI Act in 2024 aimed to create a structured regulatory framework for AI technologies. However, its effectiveness has been called into question, particularly in light of opposition from the US administration under Donald Trump. On a recent visit to Budapest, US Vice President JD Vance criticized the European Commission for what he described as an excessively intrusive approach to regulating American tech companies.
In contrast to the EU's regulatory stance, the White House has leaned towards the argument that US tech firms understand their industry best, and that any measures beyond self-regulation could hinder the growth and innovation potential of AI. This position has been bolstered by pro-AI advocacy groups funded by tech companies, which are reportedly amassing a $300 million campaign fund for the upcoming midterm elections. The money is primarily targeted at candidates, mostly Democrats, who advocate stricter regulatory measures.
Historically, reliance on self-regulation has not yielded favorable results. In the late 1990s and early 2000s, the financial sector pushed for a lenient regulatory regime, arguing that rigorous oversight would stifle economic growth; the outcome was the devastating global financial crisis of 2008. Today, many experts contend that the risks inherent in unregulated AI development far surpass those witnessed in the financial sector, and that a globally coordinated framework of checks and controls is essential to mitigate them.
As discussions continue regarding the regulation of AI, the divergent approaches between the US and EU underscore a pivotal moment in shaping the future of technology governance. The implications of these regulatory decisions will resonate beyond borders, determining how innovation is balanced with safety and ethical standards in an increasingly digitized world.