Companies developing the most advanced artificial intelligence models will be required to enhance transparency and accountability under a new law signed by California Governor Gavin Newsom that takes effect on January 1. The legislation, Senate Bill 53, aims to address the potential catastrophic risks posed by these cutting-edge systems, commonly referred to as frontier models, and introduces protections for whistleblowers working within the companies that develop them.
The law shields employees at firms such as Google and OpenAI who assess safety risks in AI systems, allowing them to report concerns without fear of retaliation. It also requires developers of large AI models to publish detailed frameworks on their websites outlining how they respond to critical safety incidents and how they assess and manage catastrophic risks. Violations of these requirements can bring fines of up to $1 million.
Under this new statute, companies must report any critical safety incidents to the state within 15 days. If the incident poses an imminent threat of death or injury, reporting must occur within 24 hours. The law defines catastrophic risk as scenarios where AI could lead to significant harm, including the potential for over 50 deaths from a cyber attack or over $1 billion in theft or damage.
The legislation follows extensive research by a Stanford University group led by Rishi Bommasani that highlighted the lack of transparency in the AI industry. His group found that only three of the 13 companies it studied routinely produce incident reports. Bommasani's research significantly influenced the drafting of SB 53, underscoring that transparency is vital to public trust in AI technologies.
Bommasani stated, “You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it.” He expressed hope that the enforcement of SB 53 would lead to better accountability, though he acknowledged that its success will depend on the resources allocated to the responsible government agencies.
The implications of the law extend beyond California; it has already influenced legislation in other states. New York Governor Kathy Hochul credited SB 53 as the foundation for her own AI transparency law, signed on December 19, and reports suggest efforts to align New York’s law more closely with California’s framework are underway.
However, critics argue that SB 53 is not comprehensive enough. The law leaves out other risks associated with AI, such as environmental impact, the spread of misinformation, and the perpetuation of societal biases. It also does not cover AI systems used by government entities to profile or score individuals, nor does it apply to companies generating less than $500 million in annual revenue.
Although AI developers are required to submit incident reports to the Office of Emergency Services (OES), these reports will not be accessible to the public through records requests. Instead, they will be shared with selected members of the California Legislature and the Governor, often with redactions to protect what companies may label as trade secrets.
Further transparency may come from Assembly Bill 2013, which also takes effect on January 1. That law requires AI companies to disclose additional information about the data used to train their models, potentially offering more insight into their operations.
Some provisions of SB 53 will not take effect until 2027, when the OES will compile a report on critical safety incidents reported by the public and by large-scale AI developers. That report may shed light on how far AI systems can act autonomously and the risks they pose to infrastructure, though it will not identify specific AI models.
As the AI landscape continues to evolve, the implementation of SB 53 marks a significant step towards greater accountability and transparency in the industry, addressing public concerns while setting a precedent for similar legislative efforts across the United States.