AI Regulation

New York’s AI Safety Law Introduces Stricter Oversight, Diverging from California Framework

New York’s RAISE Act mandates heightened scrutiny for AI developers spending over $100 million on training, diverging from California’s compliance model and complicating national oversight.

New York Governor Kathy Hochul signed a new AI safety law in December, positioning it as a counterpart to California's approach and suggesting a move toward a unified regulatory framework for advanced AI in the United States. The New York Responsible AI Safety and Education (RAISE) Act is framed as closely aligned with California's SB 53 (the Transparency in Frontier Artificial Intelligence Act), with supporters citing shared definitions and compliance mechanisms. A closer examination, however, reveals significant differences between the two laws that may exacerbate the fragmentation of state AI regulation rather than foster a cohesive national standard.

Both the New York and California laws target “frontier” AI systems, defined as those requiring more than 10²⁶ floating-point operations (FLOPs) during training. Each mandates that developers produce formal safety documents detailing how they address “critical” or “catastrophic” harms, which include risks related to the creation of chemical, biological, radiological, or nuclear (CBRN) weapons, mass-casualty events, or economic damages exceeding $1 billion. The compliance mechanisms also run in parallel, including mandatory reporting of serious incidents and protections for whistleblowers. These similarities can create the impression that New York's legislation is merely an extension of California's regulatory model.

However, the RAISE Act diverges significantly in its scope and governance philosophy. While California’s law applies strict compliance obligations to firms with more than $500 million in annual revenue, New York focuses instead on the cost of a single training run. Any company spending over $100 million on model training is classified as a “large developer” and faces heightened scrutiny. This difference means that a lean startup or research entity could be subject to New York’s regulations while remaining outside California’s strictest tier.
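The divergence can be made concrete with a short sketch. The dollar figures below come from the laws as described in this article; the function and variable names are illustrative, not statutory language.

```python
# Illustrative comparison of the strictest-tier triggers described above.
# Figures mirror the article; names are hypothetical, not statutory.

def ca_strictest_tier(annual_revenue_usd: float) -> bool:
    """California SB 53: strict compliance obligations attach to firms
    with more than $500 million in annual revenue."""
    return annual_revenue_usd > 500_000_000

def ny_large_developer(training_run_cost_usd: float) -> bool:
    """New York RAISE Act: a company spending over $100 million on a
    single training run is classified as a 'large developer'."""
    return training_run_cost_usd > 100_000_000

# A lean startup: $40M in annual revenue, but a $120M training run.
print(ca_strictest_tier(40_000_000))    # False -> outside CA's strictest tier
print(ny_large_developer(120_000_000))  # True  -> 'large developer' in NY
```

The mismatch in the example is exactly the scenario described above: the same company escapes California's strictest tier while triggering New York's heightened scrutiny.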

New York’s law further expands its regulatory reach through what is termed a knowledge distillation clause. If an organization trains a more efficient model from a frontier-level one that meets the 10²⁶ FLOPs threshold, the derivative model is itself subject to regulation if its training cost exceeds $5 million. This aims to close perceived loopholes that could allow developers to sidestep oversight by transferring capabilities into less expensive models. However, this approach undermines the original rationale of using compute thresholds, as it concedes that computational scale may not accurately reflect the risks involved.
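Extending the same illustrative sketch, the distillation clause adds a second, much lower cost trigger for derivative models. Again, the function name is hypothetical; the thresholds follow the article's description of the RAISE Act.

```python
def ny_covered_distillation(parent_training_flops: float,
                            distilled_training_cost_usd: float) -> bool:
    """RAISE Act distillation clause as described above: a model distilled
    from a frontier-level parent (more than 1e26 FLOPs of training compute)
    is covered if its own training cost exceeds $5 million."""
    return parent_training_flops > 1e26 and distilled_training_cost_usd > 5_000_000

# An $8M derivative of a frontier model is still covered...
print(ny_covered_distillation(2e26, 8_000_000))  # True
# ...even though $8M is nowhere near the $100M 'large developer' trigger.
```

Note how the clause quietly replaces a compute test with a cost test for derivatives, which is precisely the tension identified above.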

Proponents of the RAISE Act argue that the 10²⁶ FLOPs threshold is a simplistic measure of potential harm, asserting that risks remain even as models become smaller and cheaper. They contend that regulation needs to be iterative and adaptable, evolving alongside technological advancements. This creates a paradox, however: if New York's inclusion of distilled models is justified, California's compute-only trigger leaves a gap; if California's approach is sufficient, New York's extension is unnecessary. Either way, states are enforcing their own regulatory versions rather than collaboratively improving a unified framework, raising obstacles to innovation as companies navigate a labyrinth of conflicting regulations.

Additionally, the laws reveal stark differences in governance philosophies. California’s SB 53 operates under a “trust but verify” model, allowing developers to create and publish their own safety frameworks while requiring them to annually submit risk summaries to state authorities. This approach acknowledges the role of public and market pressures in managing risks without stifling innovation.

Conversely, New York’s RAISE Act embodies a “suspect and inspect” philosophy, requiring developers to maintain stringent safety protocols and to give state agencies unredacted access to their materials upon request. This creates a fundamentally different relationship between developers and the state, shifting the basis of accountability from public transparency to direct governmental scrutiny. Critics argue that state agencies lack the capacity to continuously evaluate rapidly evolving AI systems, creating an institutional mismatch that could hinder effective oversight.

Governor Hochul argues that New York’s law enhances national alignment in AI safety. Yet, in reality, it may exacerbate the fragmentation of state regulations by embedding different thresholds, triggers, and oversight mechanisms. While common language may create an illusion of unity, the reality is that states are entrenching competing regulatory frameworks. This dissonance complicates the path toward a coherent, innovation-friendly approach to AI safety.
