AI Regulation

California Law Requires AI Companies to Disclose Disaster Plans and Risk Assessments

California mandates AI firms like Google and OpenAI to disclose disaster plans and risk assessments, imposing fines up to $1M for noncompliance.

Companies developing advanced artificial intelligence models will be required to enhance transparency and accountability under a new law signed by California Governor Gavin Newsom, which takes effect on January 1. This legislation, known as Senate Bill 53, aims to address the potential catastrophic risks associated with AI technologies, commonly referred to as frontier models, and introduces protections for whistleblowers working within these companies.

The law mandates that employees at firms such as Google and OpenAI who assess safety risks related to AI systems can report concerns without fear of retaliation. Furthermore, it requires developers of large AI models to publish detailed frameworks on their websites, outlining their response strategies to critical safety incidents and how they assess and manage catastrophic risks. Violations of these requirements could result in fines of up to $1 million.

Under the new statute, companies must report any critical safety incident to the state within 15 days. If the incident poses an imminent threat of death or injury, reporting must occur within 24 hours. The law defines catastrophic risk as scenarios in which AI could cause significant harm, including more than 50 deaths from a cyberattack or more than $1 billion in theft or damage.

This legislation follows extensive research by a Stanford University group led by Rishi Bommasani, which highlighted the lack of transparency in the AI industry. His group found that only three of the 13 companies studied routinely publish incident reports. Bommasani's research significantly influenced the formulation of SB 53, which emphasizes that transparency is vital for public trust in AI technologies.

Bommasani stated, “You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it.” He expressed hope that the enforcement of SB 53 would lead to better accountability, though he acknowledged that its success will depend on the resources allocated to the responsible government agencies.

The implications of the law extend beyond California; it has already influenced legislation in other states. New York Governor Kathy Hochul credited SB 53 as the foundation for her own AI transparency law, signed on December 19, and reports suggest efforts to align New York’s law more closely with California’s framework are underway.

However, critics argue that SB 53 is not comprehensive enough. The law does not account for various risks associated with AI, such as environmental impact or the potential for spreading misinformation and perpetuating societal biases. Additionally, it does not extend to AI systems used by government entities for profiling or scoring individuals, nor does it apply to companies generating less than $500 million in annual revenue.

Although AI developers are required to submit incident reports to the Office of Emergency Services (OES), these reports will not be accessible to the public through records requests. Instead, they will be shared with selected members of the California Legislature and the Governor, often with redactions to protect what companies may label as trade secrets.

Further transparency may come from Assembly Bill 2013, which also takes effect on January 1. This law requires AI companies to disclose additional information about the data used to train their models, potentially offering more insight into their operations.

Some aspects of SB 53 will not take effect until 2027, when the OES will compile a report on critical safety incidents reported by the public and by large-scale AI developers. This report may shed light on the extent of AI capabilities in terms of autonomous actions and their risks to infrastructure, though it will keep the identities of specific AI models private.

As the AI landscape continues to evolve, the implementation of SB 53 marks a significant step towards greater accountability and transparency in the industry, addressing public concerns while setting a precedent for similar legislative efforts across the United States.

Written by AiPressa Staff
The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.