Competition is hindering the establishment of effective artificial intelligence (AI) safety standards, with companies like Anthropic retracting their previously strong safety commitments. Anthropic stated, “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.” Such pressure forces companies prioritizing safety to delay model deployment, risking customer loss and jeopardizing future investments. The issue is compounded by OpenAI’s similar response, which has included reducing pre-deployment safety testing times. Experts argue that a cohesive regulatory framework must tackle these collective action challenges at the heart of AI risks.
Despite widespread calls for AI regulation, including from the 80% of U.S. adults who favor prioritizing safety even at the cost of slower innovation, no consensus has emerged on how to implement such oversight. Both OpenAI and Anthropic CEOs have expressed the need for regulation, yet the design of an effective regulatory body remains largely unaddressed. While several proposals for a dedicated U.S. AI regulator have been put forward, the discourse surrounding AI governance has yet to mature significantly.
A viable solution does not necessitate creating an entirely new institutional structure. For nearly a century, financial regulators have successfully relied on federally supervised self-regulatory organizations (SROs), like FINRA, to govern industries through binding rules subject to government approval. Currently, every major frontier AI lab except Elon Musk’s xAI belongs to the Frontier Model Forum, which coordinates risk management among its members but lacks statutory authority and mandatory membership requirements.
Challenges for AI Regulation
Any proposed regulatory institution must confront several critical challenges. First, competition among labs creates a race to the bottom on safety, a classic collective action problem. Recent high-profile resignations over safety concerns indicate that competitive pressures continue to overshadow safety efforts. Second, potential regulators face significant information asymmetry: essential details about training data and safety evaluations remain proprietary and difficult for outsiders to access, complicating risk evaluation and mitigation.
Moreover, the rapid pace of AI development presents a significant “pacing problem.” Regulatory frameworks must adapt swiftly to keep up with technological advancement, as demonstrated by the leap in capabilities between GPT-3 and GPT-4. A robust regulatory body must be capable of timely revisions without waiting on slow legislative processes. Lastly, if the warnings from AI companies about potential catastrophic risks are credible, regulation must include proactive measures such as pre-deployment evaluations to prevent irreversible harm.
The structure of SROs in finance offers a potential roadmap for AI governance. The Securities and Exchange Commission (SEC) has long relied on SROs like FINRA and stock exchanges to oversee financial markets. SROs not only establish rules but also monitor member operations, conduct examinations, and impose sanctions for violations. This system represents a blend of industry expertise and public accountability that could be adapted for AI.
An AI SRO would focus on catastrophic risks identified by frontier labs, including threats from advanced cyberattacks and autonomous AI behavior. Legislative action could establish a supervising agency requiring all AI companies that meet specific criteria (e.g., scale, revenue) to join. This structure would formalize the Frontier Model Forum’s role, enabling it to craft rules under government oversight while preserving its existing mission.
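The membership trigger described above — coverage once a lab crosses stated criteria such as scale or revenue — can be sketched as a simple rule check. The threshold names and values below (training compute, annual revenue) are illustrative assumptions for the sketch, not figures drawn from any statute or proposal:

```python
from dataclasses import dataclass

# Hypothetical coverage thresholds; real values would be set by statute
# or by the supervising agency. These numbers are illustrative only.
COMPUTE_THRESHOLD_FLOP = 1e26        # cumulative training compute
REVENUE_THRESHOLD_USD = 100_000_000  # annual AI-related revenue

@dataclass
class Lab:
    name: str
    training_compute_flop: float
    annual_revenue_usd: float

def requires_membership(lab: Lab) -> bool:
    """A lab is covered if it crosses EITHER threshold, so that
    well-funded but pre-revenue labs are not exempt."""
    return (lab.training_compute_flop >= COMPUTE_THRESHOLD_FLOP
            or lab.annual_revenue_usd >= REVENUE_THRESHOLD_USD)

# A large frontier lab is covered; a small research shop is not.
frontier = Lab("frontier-lab", 3e26, 5e8)
startup = Lab("small-lab", 1e23, 2e6)
print(requires_membership(frontier))  # True
print(requires_membership(startup))   # False
```

Using an either/or test rather than requiring both criteria reflects the goal stated above: no lab that can train frontier models escapes coverage merely because it has little revenue, and vice versa.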
The advantages of such an SRO model are compelling. By mandating membership, it would mitigate the race to the bottom in safety investments—ensuring that no lab faces a competitive disadvantage for prioritizing safety. Additionally, the SRO framework would allow for rapid updates to safety protocols, addressing the pacing problem inherent in AI development. This model could incorporate requirements for pre-deployment testing, public safety disclosures, and third-party audits, fostering an environment where safety is prioritized alongside innovation.
While various regulatory approaches have been proposed, many share characteristics of an SRO but lack its formal structure and oversight. The regulatory markets proposal, for instance, outlines a government-organized group to set standards but does not encompass the comprehensive governance an SRO would provide. Likewise, California’s SB-813 contemplates a commission for standard-setting but fails to ensure mandatory industry participation and ongoing supervision, perpetuating the information asymmetry that hinders effective regulation.
Though the SRO model is not without its weaknesses—such as the potential for regulatory capture and conflicts of interest—it offers a flexible, expert-driven approach that has evolved through nearly a century of financial regulation. An AI SRO could learn from these historical lessons, ensuring that oversight evolves with the technology it seeks to regulate.
Ultimately, establishing an AI SRO is politically viable and has the potential to harmonize the interests of innovation and safety. Both camps face significant hurdles to achieving their regulatory ambitions independently. An SRO could facilitate binding safety standards while allowing industry stakeholders to engage directly in rule-making, easing the path for labs to invest in safety without suffering competitive penalties. The institutional framework exists; the time has come for decisive action to safeguard the future of AI.