
New York’s RAISE Act Mirrors California’s AI Law, Setting Precedent for National Standards

New York’s RAISE Act, mirroring California’s SB-53, mandates that AI developers disclose risk management frameworks and report safety incidents, affecting companies with annual gross revenues over $500 million.

New York has become the second state to enact regulations on advanced artificial intelligence (AI) models, following California’s lead. The Responsible AI Safety and Education (RAISE) Act, set to take effect on January 1, 2027, aims to align with existing AI regulatory frameworks rather than contribute to a fragmented landscape of state laws. The legislation closely tracks California’s SB-53, the first U.S. law dedicated to frontier AI, which was enacted last September.

Introduced by New York Assemblymember Alex Bores and State Senator Andrew Gounardes, the RAISE Act adopts a “trust but verify” approach that shares many of the same transparency requirements as SB-53. Both laws mandate that developers disclose their risk management frameworks and report safety incidents to state officials, offering a unified compliance pathway for AI companies operating across state lines.

Despite minor differences, the similarities between the two bills are significant. Critics who warned of a chaotic proliferation of state AI regulations may find some reassurance in this alignment. The anticipated compliance burden has not materialized, at least where frontier AI regulation is concerned, as the overlapping frameworks of California and New York mitigate the risk of conflicting state laws.

RAISE draws heavily from the text of SB-53, including definitions for terms such as catastrophic risk and foundation model, and applies its strictest requirements to AI models trained using more than 10^26 floating-point operations (FLOPs) and to companies with annual gross revenues exceeding $500 million. Like its Californian counterpart, the RAISE Act imposes transparency obligations that require companies to publish their safety testing methodologies and incident response plans.
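The two applicability thresholds described above can be sketched as a simple check. This is an illustrative sketch only, not legal advice: the function name and the assumption that both thresholds must be exceeded are interpretations of the article's phrasing, not the statutory text.

```python
def strictest_tier_applies(training_flops: float, annual_gross_revenue_usd: float) -> bool:
    """Return True if both thresholds reported in the article are exceeded.

    Assumes (per the article's wording) that the strictest requirements apply
    when a model's training compute exceeds 10^26 FLOPs AND the developer's
    annual gross revenue exceeds $500 million. The actual statute may define
    applicability differently.
    """
    COMPUTE_THRESHOLD_FLOPS = 10**26
    REVENUE_THRESHOLD_USD = 500_000_000
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and annual_gross_revenue_usd > REVENUE_THRESHOLD_USD)

# Example: a model trained on 3e26 FLOPs by a company with $2B in revenue
print(strictest_tier_applies(3e26, 2_000_000_000))  # True
```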

Notably, RAISE extends its provisions to AI models used internally, so even private deployments within companies fall under its scope. Separately, to prevent duplicative compliance burdens, the law allows New York to recognize equivalent federal standards as satisfying its state requirements.

The RAISE Act was not initially intended as a direct copy of SB-53. The original draft included stricter provisions, such as a ban on deploying models that present unreasonable risks of harm and higher penalties for violations. However, after negotiations involving Governor Kathy Hochul, the text was adjusted to mirror SB-53 more closely, likely to avoid industry backlash.

While the final version of RAISE shares many tenets with SB-53, it introduces distinct features, including the establishment of the Office of Digital Innovation, Governance, Integrity, and Trust (DIGIT). This new office will oversee company reports and can initiate additional transparency requirements, diverging from SB-53, which relies on existing state agencies for enforcement.

Looking Ahead

The enactment of RAISE raises questions about potential federal preemption of state AI regulations. Although efforts to supersede state laws have faltered previously, the current administration has expressed a renewed interest in establishing federal AI rules. President Donald Trump has directed the Federal Communications Commission (FCC) to consider a reporting and disclosure standard aimed at preempting state regulations. However, the FCC has not historically governed AI developers in this manner, raising legal questions about its authority.

A leaked draft of an executive order suggested that laws like SB-53 could be deemed overly burdensome, yet subsequent versions of the order have excluded direct references to it. The alignment of RAISE with SB-53 appears to have mitigated some industry concerns, reducing the likelihood of aggressive opposition to such transparency regulations.

As states contemplate AI legislation, bipartisan interest in regulation is becoming increasingly evident. Bills similar to RAISE have begun to surface in states like Michigan and Utah. This trend suggests that a consensus on frontier AI regulation might be taking shape, countering earlier fears of a chaotic legislative environment that could stifle innovation in the AI sector.

Nevertheless, California and New York face significant challenges regarding how their “trust but verify” frameworks will be enforced. There are uncertainties about how government agencies will utilize company risk reports, as neither SB-53 nor the RAISE Act delineates a clear framework for analyzing critical safety incidents. The effectiveness of these laws will depend on the capacity of state agencies to enforce the provisions, potentially setting important precedents for AI governance across the nation.

As the regulatory landscape evolves, the balance between fostering innovation and ensuring safety will be crucial in shaping the future of AI development.

Written By
The AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.