New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law on Friday, establishing a framework intended to enhance safety and transparency for frontier AI models. The legislation requires large AI developers to disclose their safety protocols and to report safety incidents involving their models to the state within 72 hours of becoming aware of them. The law takes effect on January 1, 2027.
In her announcement, Hochul stated, “By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols.” The law aims to address growing concerns regarding the implications of AI technologies, following a surge in public interest and regulatory scrutiny.
The RAISE Act is a direct response to an executive order issued by President Donald Trump earlier this month, which challenged the authority of states to regulate AI technologies. Hochul characterized the legislation as a “nation-leading approach to AI safety,” noting that it drew inspiration from California’s recently enacted Transparency in Frontier Artificial Intelligence Act.
“This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” she added. The RAISE Act is the first state-level AI regulation enacted since Trump’s order, which aims to establish an AI Litigation Task Force to contest state laws that are deemed to interfere with existing federal regulations or “unconstitutionally regulate interstate commerce.”
New York’s RAISE Act will require prominent AI developers, including companies like OpenAI and Anthropic, to document their technical and organizational safety protocols as well as their testing and evaluation procedures. Developers must also designate a senior employee responsible for compliance with these regulations. In addition, the law imposes a reporting requirement for incidents such as unintended model behaviors, technical failures, or security breaches.
Developers who fail to comply with the reporting requirements or who provide false information could face civil penalties of up to $1 million for a first violation and up to $3 million for subsequent violations. State Senator Andrew Gounardes remarked, “Big tech oligarchs think it’s fine to put their profits ahead of our safety — we disagree. With this law, we make clear that tech innovation and safety don’t have to be at odds.”
The RAISE Act aims to hold AI developers accountable, reflecting a national trend toward increased regulation of technology in response to public concerns. As states like New York and California set these precedents, they could pave the way for a more cohesive regulatory landscape, especially as the federal government continues to grapple with the complexities of AI governance.
As the dialogue around AI regulation continues, the RAISE Act may influence how other states approach similar legislation, particularly amid ongoing federal challenges to state authority. The contrasting positions on AI regulation between state and federal levels will likely fuel further debate on the future of technological governance in the United States.