
New York’s AI Safety Law Introduces Stricter Oversight, Diverging from California Framework

New York’s RAISE Act mandates heightened scrutiny for AI developers spending over $100 million on training, diverging from California’s compliance model and complicating national oversight.

New York Governor Kathy Hochul signed a new AI safety law in December, positioning it as a counterpart to California’s approach and suggesting a move toward a unified regulatory framework for advanced AI in the United States. The New York Responsible AI Safety and Education (RAISE) Act aims to align closely with California’s SB 53 (the Transparency in Frontier Artificial Intelligence Act), citing shared definitions and compliance mechanisms. However, a closer examination reveals that significant differences between the two laws may exacerbate the fragmentation of state AI regulations rather than foster a cohesive national standard.

Both New York’s and California’s laws target “frontier” AI systems, defined as those requiring more than 10²⁶ floating-point operations (FLOPs) during training. Each mandates that developers produce formal safety documents detailing how they address “critical” or “catastrophic” harms, which include risks related to the creation of chemical, biological, radiological, or nuclear (CBRN) weapons, mass-casualty events, or economic damages exceeding $1 billion. Compliance mechanisms, such as mandatory reporting of serious incidents and protections for whistleblowers, also show parallels. These parallels can create the impression that New York’s legislation is merely an extension of California’s regulatory model.

However, the RAISE Act diverges significantly in its scope and governance philosophy. While California’s law applies strict compliance obligations to firms with more than $500 million in annual revenue, New York focuses instead on the cost of a single training run. Any company spending over $100 million on model training is classified as a “large developer” and faces heightened scrutiny. This difference means that a lean startup or research entity could be subject to New York’s regulations while remaining outside California’s strictest tier.

New York’s law further expands its regulatory reach through what is termed a knowledge distillation clause. If an organization trains a more efficient model using a frontier-level model that meets the 10²⁶ FLOPs threshold, the derivative model is subject to regulation if its training cost exceeds $5 million. This aims to close perceived loopholes that could allow developers to sidestep oversight by transferring capabilities into less expensive models. However, this approach undermines the original rationale of using compute thresholds, as it suggests that computational scale may not accurately reflect the risks involved.
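The divergence between the two laws’ triggers can be made concrete with a small sketch. The following illustrative Python encodes the thresholds as described above (10²⁶ FLOPs, New York’s $100 million training-run and $5 million distillation triggers, California’s $500 million revenue tier); the function names and data structure are hypothetical constructions for illustration, not a statement of either law’s legal text.

```python
# Illustrative sketch only: encodes the regulatory triggers described in this
# article as a toy classifier. Thresholds are taken from the article; the
# names and structure here are hypothetical, and this is not legal guidance.
from dataclasses import dataclass

FRONTIER_FLOPS = 1e26  # "frontier" compute threshold shared by both laws


@dataclass
class Model:
    training_flops: float            # total FLOPs used in training
    training_cost_usd: float         # cost of the single training run
    developer_revenue_usd: float     # developer's annual revenue
    distilled_from_frontier: bool = False  # derived from a frontier model


def ny_covered(m: Model) -> bool:
    """NY RAISE Act (as described above): heightened scrutiny if the model is
    frontier-scale and the training run cost over $100M, or if it is a
    distillation of a frontier model with a training cost over $5M."""
    if m.training_flops > FRONTIER_FLOPS and m.training_cost_usd > 100e6:
        return True
    if m.distilled_from_frontier and m.training_cost_usd > 5e6:
        return True
    return False


def ca_strict_tier(m: Model) -> bool:
    """CA SB 53 (as described above): strictest obligations apply to
    frontier-scale models from developers with over $500M annual revenue."""
    return m.training_flops > FRONTIER_FLOPS and m.developer_revenue_usd > 500e6


# A lean startup with a costly frontier run: covered in New York,
# but outside California's strictest tier.
startup = Model(training_flops=2e26, training_cost_usd=150e6,
                developer_revenue_usd=50e6)
```

Running the two checks on `startup` returns `True` for New York and `False` for California, which is precisely the scenario the article describes: the same developer faces heightened scrutiny in one state and not the other.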

Proponents of the RAISE Act argue that the 10²⁶ FLOPs threshold is a simplistic measure of potential harm, asserting that risks remain even as models become smaller and cheaper. They contend that regulation needs to be iterative and adaptable, evolving alongside technological advancements. However, this creates a paradox. If New York’s inclusion of distilled models is justified, it implies that California’s approach is flawed, and vice versa. This divergence means that states are enforcing their own regulatory versions rather than collaboratively improving a unified framework. The result is increased obstacles to innovation as companies navigate a labyrinth of conflicting regulations.

Additionally, the laws reveal stark differences in governance philosophies. California’s SB 53 operates under a “trust but verify” model, allowing developers to create and publish their own safety frameworks while requiring them to annually submit risk summaries to state authorities. This approach acknowledges the role of public and market pressures in managing risks without stifling innovation.

Conversely, New York’s RAISE Act embodies a “suspect and inspect” philosophy, necessitating that developers maintain stringent safety protocols and offer state agencies unredacted access to their materials upon request. This leads to a fundamentally different relationship between developers and the state, shifting accountability from transparency to constant governmental scrutiny. Critics argue that state agencies lack the capacity to continuously evaluate rapidly evolving AI systems, creating an institutional mismatch that could hinder effective oversight.

Governor Hochul argues that New York’s law enhances national alignment in AI safety. Yet, in reality, it may exacerbate the fragmentation of state regulations by embedding different thresholds, triggers, and oversight mechanisms. While common language may create an illusion of unity, the reality is that states are entrenching competing regulatory frameworks. This dissonance complicates the path toward a coherent, innovation-friendly approach to AI safety.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.