
OpenAI’s GPT-5.3-Codex Release Allegedly Violates California AI Safety Law

OpenAI’s GPT-5.3-Codex launch faces allegations of violating California’s SB 53 safety law, risking millions in fines for noncompliance.

OpenAI is facing allegations that it violated California's newly enacted AI safety law with last week's release of its latest coding model, GPT-5.3-Codex. The claims come from the AI watchdog group the Midas Project, which argues that the company did not adhere to its own safety commitments, potentially exposing OpenAI to substantial fines under California law.

The controversy surrounding GPT-5.3-Codex highlights broader concerns about the cybersecurity risks associated with advanced AI models. As part of its effort to regain a competitive edge in AI-powered coding, OpenAI touted improved performance for the new model over prior versions and rival models from companies such as Anthropic. However, the launch has raised alarms because the model is classified as high-risk, meaning it could facilitate substantial cyber harm if exploited.

CEO Sam Altman noted that GPT-5.3-Codex is categorized as "high" risk under OpenAI's internal Preparedness Framework. This classification indicates that the model possesses capabilities that, if misused, could lead to serious cybersecurity incidents. Altman emphasized that these capabilities necessitate a more cautious approach to safety measures.

The allegations center on California's Senate Bill 53 (SB 53), which took effect in January. The law requires major AI companies to develop safety frameworks designed to mitigate catastrophic risks, defined as incidents causing more than 50 fatalities or over $1 billion in property damage, and to adhere to those frameworks. It also prohibits misleading statements about compliance, making companies' published safety commitments legally binding.

According to the Midas Project, OpenAI’s safety framework includes stringent safeguards for models classified as having high cybersecurity risks. These safeguards are intended to prevent the AI from acting in ways that could compromise safety, such as engaging in deceptive behavior or concealing its true features. Despite this, OpenAI proceeded with the model’s launch without implementing these necessary protections, prompting allegations of noncompliance.

In defense, OpenAI claimed the framework’s language was “ambiguous.” The company explained that safeguards are deemed necessary only when high cyber risk coincides with “long-range autonomy,” the ability of an AI to operate independently for extended periods. OpenAI maintains that GPT-5.3-Codex does not possess such autonomy, thereby justifying the absence of additional safeguards.

A spokesperson for OpenAI told Fortune that the company feels “confident in our compliance with frontier safety laws, including SB 53.” They indicated that GPT-5.3-Codex underwent a thorough testing and governance process, as detailed in its publicly released system card. Internal expert evaluations, including those from the Safety Advisory Group, supported the assertion that the model lacks long-range autonomy capabilities.

However, the Midas Project and some safety researchers have questioned this interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, expressed skepticism about OpenAI’s rationale, arguing that the documentation does not present an ambiguous situation. Calvin’s recent commentary on social media suggested that OpenAI’s claims about the framework’s ambiguity may be a cover for not following through on established safety plans.

Moreover, the Midas Project has contended that OpenAI cannot definitively demonstrate that the model lacks the autonomy that would trigger the additional safety measures. The group pointed out that OpenAI's previous models have already scored highly on benchmarks for autonomous task completion, raising further questions about the decision-making process behind GPT-5.3-Codex's release.

Tyler Johnston, founder of the Midas Project, characterized the potential violation as “especially embarrassing,” given the relatively low compliance threshold established by SB 53. He highlighted that the law essentially requires companies to adopt a voluntary safety plan and communicate their adherence accurately, allowing for updates as needed without engaging in misleading practices.

If the allegations are substantiated and an investigation is initiated, OpenAI could face significant penalties under SB 53, potentially amounting to millions of dollars depending on the severity and duration of any noncompliance. The California Attorney General’s Office has indicated its commitment to enforcing state laws aimed at enhancing transparency and safety in the AI sector, though it has refrained from commenting on any specific investigations.

This unfolding situation underscores the challenges and responsibilities faced by AI companies as they navigate regulatory landscapes while striving to innovate. As the discourse around AI safety intensifies, the implications of this case could set important precedents for the industry at large.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.
