New York has become the second state, after California, to regulate advanced artificial intelligence (AI) models. The Responsible AI Safety and Education (RAISE) Act, set to take effect on January 1, 2027, aims to align with the existing regulatory framework rather than add to a fragmented patchwork of state laws. The legislation tracks closely with California’s SB-53, the first U.S. law dedicated to frontier AI, which was enacted last September.
Introduced by New York Assemblymember Alex Bores and State Senator Andrew Gounardes, the RAISE Act adopts a “trust but verify” approach that shares many of the same transparency requirements as SB-53. Both laws mandate that developers disclose their risk management frameworks and report safety incidents to state officials, offering a unified compliance pathway for AI companies operating across state lines.
Despite minor differences, the similarities between the two bills are significant. Critics who warned of a chaotic proliferation of state AI regulations may find some reassurance in this alignment. The feared compliance burden has not materialized, at least with regard to frontier AI: the overlapping frameworks of California and New York reduce the risk of conflicting state laws.
RAISE draws heavily from the text of SB-53, including definitions for terms such as catastrophic risk and foundation model, and applies its strictest requirements to AI models trained on more than 10²⁶ floating-point operations (FLOPs) and to companies with annual gross revenues exceeding $500 million. Like its Californian counterpart, the RAISE Act imposes transparency obligations that require companies to publish their safety testing methodologies and incident response plans.
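For a rough sense of what the 10²⁶-FLOP threshold means in practice, the sketch below uses the widely cited approximation that training compute is about 6 × parameters × training tokens. This is purely illustrative: the model sizes and token counts are hypothetical, and neither law prescribes this estimation method.

```python
# Back-of-the-envelope check against the 10^26-FLOP threshold used by SB-53 and RAISE.
# Approximation: training FLOPs ~ 6 * parameters * training tokens.
# All figures below are hypothetical examples, not claims about real systems.

THRESHOLD_FLOPS = 1e26  # compute threshold for the strictest requirements

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute with the 6*N*D rule of thumb."""
    return 6 * parameters * tokens

hypothetical_runs = {
    "70B params / 15T tokens": (70e9, 15e12),
    "400B params / 30T tokens": (400e9, 30e12),
    "1T params / 40T tokens": (1e12, 40e12),
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimated_training_flops(params, tokens)
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.2e} FLOPs ({status} the 10^26 threshold)")
```

Under this rough estimate, only the very largest training runs cross the line, which is consistent with both laws' stated focus on frontier-scale developers.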
Notably, RAISE extends its provisions to AI models used internally, so even private deployments within a company fall under the law. Separately, to prevent duplicative compliance burdens, the law allows New York to recognize equivalent federal standards as satisfying its state requirements.
The RAISE Act was not initially intended as a direct copy of SB-53. The original draft included stricter provisions, such as a ban on deploying models that present unreasonable risks of harm and higher penalties for violations. However, after negotiations involving Governor Kathy Hochul, the text was adjusted to mirror SB-53 more closely, likely to avoid industry backlash.
While the final version of RAISE shares many tenets with SB-53, it introduces distinct features, including the establishment of the Office of Digital Innovation, Governance, Integrity, and Trust (DIGIT). This new office will oversee company reports and can initiate additional transparency requirements, diverging from SB-53, which relies on existing state agencies for enforcement.
Looking Ahead
The enactment of RAISE raises questions about potential federal preemption of state AI regulations. Although efforts to supersede state laws have faltered previously, the current administration has expressed renewed interest in establishing federal AI rules. President Donald Trump has directed the Federal Communications Commission (FCC) to consider a reporting and disclosure standard aimed at preempting state regulations. However, the FCC has not historically regulated AI developers, raising legal questions about its authority to do so.
A leaked draft of an executive order suggested that laws like SB-53 could be deemed overly burdensome, yet subsequent versions of the order have excluded direct references to it. The alignment of RAISE with SB-53 appears to have mitigated some industry concerns, reducing the likelihood of aggressive opposition to such transparency regulations.
As states contemplate AI legislation, bipartisan interest in regulation is becoming increasingly evident. Bills similar to RAISE have begun to surface in states like Michigan and Utah. This trend suggests that a consensus on frontier AI regulation might be taking shape, countering earlier fears of a chaotic legislative environment that could stifle innovation in the AI sector.
Nevertheless, California and New York face significant challenges regarding how their “trust but verify” frameworks will be enforced. There are uncertainties about how government agencies will utilize company risk reports, as neither SB-53 nor the RAISE Act delineates a clear framework for analyzing critical safety incidents. The effectiveness of these laws will depend on the capacity of state agencies to enforce the provisions, potentially setting important precedents for AI governance across the nation.
As the regulatory landscape evolves, the balance between fostering innovation and ensuring safety will be crucial in shaping the future of AI development.