AI Regulation

OpenAI’s GPT-5.3-Codex Release Allegedly Violates California AI Safety Law

OpenAI’s GPT-5.3-Codex launch faces allegations of violating California’s SB 53 safety law, risking millions in fines for noncompliance.

OpenAI is facing allegations that it violated California’s newly enacted AI safety law with the release of its latest coding model, GPT-5.3-Codex, last week. The claims come from the Midas Project, an AI watchdog group, which argues that the company did not adhere to its own safety commitments, potentially exposing OpenAI to substantial fines under California law.

The controversy surrounding GPT-5.3-Codex highlights broader concerns about the cybersecurity risks associated with AI models. As part of its effort to regain a competitive edge in AI-powered coding, OpenAI touted improved performance for the new model compared with prior versions and with rival models from competitors such as Anthropic. However, the launch has raised alarms because the model is classified as high risk, meaning it could facilitate substantial cyber harm if exploited.

CEO Sam Altman noted that GPT-5.3-Codex is categorized as a “high” risk in OpenAI’s internal Preparedness Framework. This classification indicates that the model possesses capabilities that, if misused, could lead to serious cybersecurity incidents. Altman emphasized that this model’s capabilities necessitate a more cautious approach to safety measures.

The allegations center on California’s Senate Bill 53 (SB 53), which came into effect in January. The law requires major AI companies to develop and adhere to safety frameworks designed to mitigate catastrophic risks, defined as incidents causing more than 50 fatalities or more than $1 billion in property damage. It also prohibits companies from making misleading statements about their compliance, effectively making those frameworks legally binding.

According to the Midas Project, OpenAI’s safety framework requires stringent safeguards for models classified as posing high cybersecurity risk. These safeguards are intended to prevent the model from acting in ways that could compromise safety, such as engaging in deceptive behavior or concealing its true capabilities. Despite this, OpenAI proceeded with the launch without implementing those protections, prompting the allegations of noncompliance.

In its defense, OpenAI claimed the framework’s language was “ambiguous.” The company explained that the safeguards are required only when high cyber risk coincides with “long-range autonomy,” the ability of an AI to operate independently for extended periods. OpenAI maintains that GPT-5.3-Codex does not possess such autonomy, thereby justifying the absence of the additional safeguards.

A spokesperson for OpenAI told Fortune that the company is “confident in our compliance with frontier safety laws, including SB 53.” They said GPT-5.3-Codex underwent a thorough testing and governance process, detailed in its publicly released system card, and that internal expert evaluations, including those from the company’s Safety Advisory Group, supported the conclusion that the model lacks long-range autonomy.

However, the Midas Project and some safety researchers have questioned this interpretation. Nathan Calvin, vice president of state affairs and general counsel at Encode, expressed skepticism about OpenAI’s rationale, arguing that the documentation does not present an ambiguous situation. Calvin’s recent commentary on social media suggested that OpenAI’s claims about the framework’s ambiguity may be a cover for not following through on established safety plans.

Moreover, the Midas Project contends that OpenAI cannot definitively show that the model lacks the long-range autonomy that would trigger the additional safeguards. The group pointed out that OpenAI’s previous models have already posted strong benchmark results on autonomous task completion, raising further questions about the decision-making behind GPT-5.3-Codex’s release.

Tyler Johnston, founder of the Midas Project, characterized the potential violation as “especially embarrassing,” given the relatively low bar SB 53 sets for compliance. He noted that the law essentially requires companies to adopt a safety plan of their own choosing, follow it, and describe their adherence accurately; companies remain free to update their plans, so long as they do not make misleading claims.

If the allegations are substantiated and an investigation is initiated, OpenAI could face significant penalties under SB 53, potentially amounting to millions of dollars depending on the severity and duration of any noncompliance. The California Attorney General’s Office has indicated its commitment to enforcing state laws aimed at enhancing transparency and safety in the AI sector, though it has refrained from commenting on any specific investigations.

This unfolding situation underscores the challenges and responsibilities faced by AI companies as they navigate regulatory landscapes while striving to innovate. As the discourse around AI safety intensifies, the implications of this case could set important precedents for the industry at large.

