
California Law Requires AI Companies to Disclose Disaster Plans and Risk Assessments

California now requires AI firms such as Google and OpenAI to disclose disaster plans and risk assessments, with fines of up to $1 million for noncompliance.

Companies developing advanced artificial intelligence models will be required to enhance transparency and accountability under a new law signed by California Governor Gavin Newsom, which takes effect on January 1. The legislation, known as Senate Bill 53, aims to address the potential catastrophic risks posed by the most advanced AI systems, commonly referred to as frontier models, and introduces protections for whistleblowers working at the companies that build them.

The law protects employees at firms such as Google and OpenAI who assess safety risks in AI systems, allowing them to report concerns without fear of retaliation. It also requires developers of large AI models to publish detailed frameworks on their websites outlining how they respond to critical safety incidents and how they assess and manage catastrophic risks. Violations of these requirements can result in fines of up to $1 million.

Under the new statute, companies must report any critical safety incident to the state within 15 days; if an incident poses an imminent threat of death or injury, it must be reported within 24 hours. The law defines catastrophic risk as a scenario in which AI could cause severe harm, such as a cyberattack leading to more than 50 deaths or more than $1 billion in theft or damage.

The legislation follows extensive research by a Stanford University group led by Rishi Bommasani that highlighted the lack of transparency in the AI industry; the group found that only three of the 13 companies it studied routinely publish incident reports. Bommasani's research significantly influenced the drafting of SB 53, underscoring that transparency is vital to public trust in AI technologies.

Bommasani stated, “You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it.” He expressed hope that the enforcement of SB 53 would lead to better accountability, though he acknowledged that its success will depend on the resources allocated to the responsible government agencies.

The implications of the law extend beyond California; it has already influenced legislation in other states. New York Governor Kathy Hochul credited SB 53 as the foundation for her own AI transparency law, signed on December 19, and reports suggest efforts to align New York’s law more closely with California’s framework are underway.

Critics argue, however, that SB 53 is not comprehensive enough. The law does not address other AI-related risks, such as environmental impact, the spread of misinformation, and the perpetuation of societal biases. It also does not cover AI systems used by government entities to profile or score individuals, and it exempts companies generating less than $500 million in annual revenue.

Although AI developers are required to submit incident reports to the Office of Emergency Services (OES), these reports will not be accessible to the public through records requests. Instead, they will be shared with selected members of the California Legislature and the Governor, often with redactions to protect what companies may label as trade secrets.

Further transparency may come from Assembly Bill 2013, which also takes effect on January 1. That law requires AI companies to disclose additional information about the data used to train their models, potentially offering more insight into their operations.

Some provisions of SB 53 will not take effect until 2027, when the OES is due to compile a report on critical safety incidents reported by the public and by large-scale AI developers. That report may shed light on how far AI systems can act autonomously and the risks they pose to infrastructure, though it will not identify the specific AI models involved.

As the AI landscape continues to evolve, the implementation of SB 53 marks a significant step towards greater accountability and transparency in the industry, addressing public concerns while setting a precedent for similar legislative efforts across the United States.


