New York’s AI Safety Law Introduces Stricter Oversight, Diverging from California Framework

New York’s RAISE Act mandates heightened scrutiny for AI developers spending over $100 million on training, diverging from California’s compliance model and complicating national oversight.

New York Governor Kathy Hochul signed a new AI safety law in December, positioning it as a counterpart to California’s approach and suggesting a move toward a unified regulatory framework for advanced AI in the United States. The New York Responsible AI Safety and Education (RAISE) Act aims to align closely with California’s SB 53 (the Transparency in Frontier Artificial Intelligence Act), citing shared definitions and compliance mechanisms. However, a closer examination reveals that significant differences between the two laws may exacerbate the fragmentation of state AI regulations rather than foster a cohesive national standard.

Both New York’s and California’s laws target “frontier” AI systems, defined as those requiring more than 10²⁶ floating-point operations (FLOPs) during training. Each mandates that developers produce formal safety documents detailing how they address “critical” or “catastrophic” harms, including risks related to the creation of chemical, biological, radiological, or nuclear (CBRN) weapons, mass-casualty events, or economic damage exceeding $1 billion. Compliance mechanisms, such as mandatory reporting of serious incidents and protections for whistleblowers, also run in parallel. These similarities can create the impression that New York’s legislation is merely an extension of California’s regulatory model.

However, the RAISE Act diverges significantly in its scope and governance philosophy. While California’s law applies strict compliance obligations to firms with more than $500 million in annual revenue, New York focuses instead on the cost of a single training run. Any company spending over $100 million on model training is classified as a “large developer” and faces heightened scrutiny. This difference means that a lean startup or research entity could be subject to New York’s regulations while remaining outside California’s strictest tier.

New York’s law further expands its regulatory reach through what is termed a knowledge distillation clause. If an organization trains a more efficient model using a frontier-level one that meets the 10²⁶ FLOPs threshold, the derivative model is subject to regulation if its training cost exceeds $5 million. This aims to close perceived loopholes that could allow developers to sidestep oversight by transferring capabilities into less expensive models. However, this approach undermines the original rationale for compute thresholds, as it concedes that computational scale may not accurately reflect the risks involved.
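The differing triggers described above can be summarized as a simple decision rule. The sketch below is illustrative only: the dollar figures and the 10²⁶ FLOPs threshold come from this article, while all function and parameter names are hypothetical, and it simplifies away the statutes’ many definitional nuances.

```python
# Illustrative comparison of the coverage triggers described in the article.
# Thresholds are taken from the article; everything else is a simplification.

FRONTIER_FLOPS = 1e26            # compute threshold used by both laws
NY_TRAINING_COST = 100_000_000   # RAISE Act "large developer" trigger
NY_DISTILL_COST = 5_000_000      # RAISE Act distillation trigger
CA_REVENUE = 500_000_000         # SB 53 strictest-tier revenue trigger


def covered_by_ny(training_flops, training_cost_usd,
                  distilled_from_frontier=False):
    """RAISE Act sketch: coverage keys on the cost of a single training
    run, plus a $5M+ catch for models distilled from frontier systems."""
    if distilled_from_frontier and training_cost_usd > NY_DISTILL_COST:
        return True
    return (training_flops > FRONTIER_FLOPS
            and training_cost_usd > NY_TRAINING_COST)


def covered_by_ca_strict_tier(training_flops, annual_revenue_usd):
    """SB 53 sketch: the strictest tier keys on developer revenue,
    not on the cost of any individual training run."""
    return (training_flops > FRONTIER_FLOPS
            and annual_revenue_usd > CA_REVENUE)


# The article's lean-startup case: a frontier-scale, $150M training run
# by a company with only $50M in annual revenue.
in_ny_scope = covered_by_ny(2e26, 150_000_000)             # True
in_ca_scope = covered_by_ca_strict_tier(2e26, 50_000_000)  # False
```

The startup example illustrates the article’s point: the same training run can fall inside New York’s strictest tier while remaining outside California’s, because the two laws key on different facts about the developer.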

Proponents of the RAISE Act argue that the 10²⁶ FLOPs threshold is a simplistic proxy for potential harm, asserting that risks persist even as models become smaller and cheaper. They contend that regulation must be iterative and adaptable, evolving alongside technological advances. But this creates a paradox: if New York’s inclusion of distilled models is justified, then California’s narrower scope leaves a genuine gap; if it is not, New York’s expansion is unwarranted. Either way, states are enforcing their own regulatory variants rather than collaboratively refining a unified framework, raising obstacles to innovation as companies navigate a labyrinth of conflicting rules.

Additionally, the laws reveal stark differences in governance philosophies. California’s SB 53 operates under a “trust but verify” model, allowing developers to create and publish their own safety frameworks while requiring them to annually submit risk summaries to state authorities. This approach acknowledges the role of public and market pressures in managing risks without stifling innovation.

Conversely, New York’s RAISE Act embodies a “suspect and inspect” philosophy, requiring developers to maintain stringent safety protocols and to give state agencies unredacted access to their materials upon request. This establishes a fundamentally different relationship between developers and the state, shifting accountability from transparency to constant governmental scrutiny. Critics argue that state agencies lack the capacity to continuously evaluate rapidly evolving AI systems, creating an institutional mismatch that could hinder effective oversight.

Governor Hochul argues that New York’s law enhances national alignment in AI safety. Yet, in reality, it may exacerbate the fragmentation of state regulations by embedding different thresholds, triggers, and oversight mechanisms. While common language may create an illusion of unity, the reality is that states are entrenching competing regulatory frameworks. This dissonance complicates the path toward a coherent, innovation-friendly approach to AI safety.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.