OpenAI is facing allegations that it violated California’s newly enacted AI safety law following the release of its latest coding model, GPT-5.3-Codex, last week. The claims come from the Midas Project, an AI watchdog group, which argues that the company failed to adhere to its own safety commitments, potentially exposing it to substantial fines under California law.
The controversy surrounding GPT-5.3-Codex centers on the cybersecurity risks associated with powerful AI models. As part of its effort to regain a competitive edge in AI-powered coding, OpenAI touted improved performance for the new model over prior versions and rivals such as Anthropic. However, the launch has raised alarms because the model is classified as high risk, meaning it could enable substantial cyber harm if misused.
CEO Sam Altman noted that GPT-5.3-Codex is categorized as a “high” risk in OpenAI’s internal Preparedness Framework. This classification indicates that the model possesses capabilities that, if misused, could lead to serious cybersecurity incidents. Altman emphasized that this model’s capabilities necessitate a more cautious approach to safety measures.
The allegations center on California’s Senate Bill 53 (SB 53), which came into effect in January. The law requires major AI companies to develop and adhere to safety frameworks designed to mitigate catastrophic risks, defined as incidents causing more than 50 fatalities or $1 billion in property damage. It also prohibits misleading statements about compliance, effectively making those published frameworks legally binding.
According to the Midas Project, OpenAI’s own safety framework calls for stringent safeguards on models classified as posing high cybersecurity risk. These safeguards are meant to prevent the model from acting in ways that could compromise safety, such as engaging in deceptive behavior or concealing its true capabilities. Despite this, the group says, OpenAI launched the model without implementing those protections, prompting the allegations of noncompliance.
In defense, OpenAI claimed the framework’s language was “ambiguous.” The company explained that safeguards are deemed necessary only when high cyber risk coincides with “long-range autonomy,” the ability of an AI to operate independently for extended periods. OpenAI maintains that GPT-5.3-Codex does not possess such autonomy, thereby justifying the absence of additional safeguards.
A spokesperson for OpenAI told Fortune that the company is “confident in our compliance with frontier safety laws, including SB 53.” They said GPT-5.3-Codex underwent a thorough testing and governance process, as detailed in its publicly released system card, and that internal expert evaluations, including those from the Safety Advisory Group, supported the conclusion that the model lacks long-range autonomy capabilities.
However, the Midas Project and some safety researchers have questioned this interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, expressed skepticism about OpenAI’s rationale, arguing that the documentation does not present an ambiguous situation. Calvin’s recent commentary on social media suggested that OpenAI’s claims about the framework’s ambiguity may be a cover for not following through on established safety plans.
Moreover, the Midas Project has contended that OpenAI cannot definitively show that the model lacks the autonomy that would trigger the additional safety measures. The group pointed out that OpenAI’s previous models have already posted strong results on benchmarks of autonomous task completion, raising further questions about the decision-making behind GPT-5.3-Codex’s release.
Tyler Johnston, founder of the Midas Project, characterized the potential violation as “especially embarrassing,” given the relatively low bar SB 53 sets for compliance. He noted that the law essentially requires companies to adopt a safety plan of their own design, follow it, and describe their adherence accurately, while allowing them to update the plan as needed, so long as they avoid misleading statements.
If the allegations are substantiated and an investigation is initiated, OpenAI could face significant penalties under SB 53, potentially amounting to millions of dollars depending on the severity and duration of any noncompliance. The California Attorney General’s Office has indicated its commitment to enforcing state laws aimed at enhancing transparency and safety in the AI sector, though it has refrained from commenting on any specific investigations.
This unfolding situation underscores the challenges and responsibilities faced by AI companies as they navigate regulatory landscapes while striving to innovate. As the discourse around AI safety intensifies, the implications of this case could set important precedents for the industry at large.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health