New York has officially joined California in leading the charge on artificial intelligence regulation in the United States. On Friday, Governor Kathy Hochul signed a revised version of the Responsible AI Safety and Education Act, known as the RAISE Act, which establishes stringent safety obligations for developers of advanced AI systems. While the final version of the law is less punitive than its June predecessor, it represents a significant development in state-level AI safety legislation. Businesses should use the time before its January 1, 2027 effective date to understand its requirements and prepare for compliance.
The RAISE Act focuses on mitigating catastrophic risks associated with highly capable AI systems, distinguishing it from other state laws that emphasize bias and consumer protection. The law specifically targets “frontier models,” which are the most advanced AI systems capable of causing serious harm, including cyberattacks and infrastructure damage. It applies to AI developers with annual revenues exceeding $500 million and those operating frontier AI models in New York.
One key aspect of the RAISE Act is its core safety obligations, which include four mandatory requirements for covered companies. First, developers must create and adhere to written safety protocols and conduct thorough assessments of potential risks that their AI systems could pose. This includes implementing safeguards to prevent or mitigate risks of “critical harm” to individuals or property.
Second, AI developers are required to report critical safety incidents to the state within 72 hours of their occurrence, a significantly shorter timeframe than California’s 15-day reporting period. This rapid notification is aimed at ensuring prompt action in the event of safety breaches.
Third, the law establishes a new AI oversight office within the New York Department of Financial Services (DFS). This office will handle registration of covered developers, collect fees to support regulatory activity, issue regulations, and publish annual reports detailing AI safety risks. Lastly, enforcement will fall to the New York Attorney General, who can impose penalties of up to $1 million for a first violation and up to $3 million for subsequent violations. Notably, the law does not create a private right of action, so individuals cannot sue directly under it.
The RAISE Act’s effective date allows time for regulators to establish the oversight framework and gives companies time to prepare for compliance. Businesses must remain vigilant, as the regulatory landscape continues to evolve, especially in light of recent federal challenges to state-level AI regulations. President Trump’s executive order authorizing federal lawsuits against states with AI laws perceived as stifling innovation could complicate the regulatory environment, while some Congressional Republicans are advocating for proposals that would limit state-level AI regulation.
As the industry anticipates these developments, companies are advised to take proactive steps now. Even organizations that do not develop frontier AI models themselves should keep abreast of the RAISE Act and use 2026 to put necessary measures in place. This may include conducting vendor diligence to assess whether AI models supplied by third parties fall under the law's definitions and how those vendors manage safety risks. Businesses should also consider incorporating AI safety disclosures into procurement agreements and maintaining clear internal governance policies to navigate the evolving regulatory landscape.
New York and California are setting early benchmarks in AI regulation, and their models may influence other states as they formulate their own frameworks. Consequently, businesses operating across multiple states should be prepared for a patchwork of compliance requirements. As federal intervention looms, the potential for a comprehensive national approach remains uncertain, but it is essential for companies to stay informed and adaptable.