New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law, establishing a regulatory framework for organizations developing or deploying frontier AI models. The law will take effect on January 1, 2027, and comes in the wake of President Trump’s executive order aimed at curbing state-level AI regulations. The RAISE Act introduces enforceable compliance obligations and civil penalties, and establishes a new oversight office within the state’s Department of Financial Services (DFS).
Following chapter amendments set to be enacted in January 2026, the law will apply to developers of “frontier models” with annual revenues exceeding $500 million. Frontier models are defined as AI systems trained using more than 10²⁶ computational operations (FLOPs) at a compute cost greater than $100 million. The definition also covers models developed through “knowledge distillation,” wherein a smaller AI model is trained on the outputs of a larger one.
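As a rough sketch only, the definitional test can be expressed as a simple check. The thresholds below come from the article; the function name, parameters, and the exact treatment of distillation are illustrative assumptions, not statutory language.

    # Illustrative sketch of the frontier-model definition described above.
    # Thresholds are from the article; names and structure are hypothetical.

    FLOP_THRESHOLD = 1e26                   # computational operations used in training
    COMPUTE_COST_THRESHOLD = 100_000_000    # USD

    def is_frontier_model(training_flops: float,
                          compute_cost_usd: float,
                          distilled_from_frontier: bool = False) -> bool:
        """Rough check of whether a model meets the definitional thresholds."""
        trained_at_scale = (training_flops > FLOP_THRESHOLD
                            and compute_cost_usd > COMPUTE_COST_THRESHOLD)
        # Models produced by knowledge distillation from a frontier model are
        # also treated as frontier models under the definition sketched above.
        return trained_at_scale or distilled_from_frontier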
The revenue threshold aligns New York’s regulations with California’s Transparency in Frontier Artificial Intelligence Act (TFAIA), creating a “unified benchmark among the country’s leading tech states,” according to Hochul’s office. While the original statute employed compute-cost thresholds, the revised framework focuses on revenue to better harmonize with California’s legislative approach. Notably, accredited colleges and universities engaged in academic research are exempt from these regulations, provided they do not transfer intellectual property rights to commercial entities.
Under the RAISE Act, developers must implement and publish detailed safety and security protocols before deploying a frontier model. These protocols must reduce the risk of “critical harm,” defined as the death of or serious injury to 100 or more people, or at least $1 billion in damages. Developers are also required to describe the testing procedures used to assess the risk of such harm, designate personnel responsible for compliance, and provide copies of their protocols to the Division of Homeland Security and Emergency Services (DHSES) and the New York Attorney General.
Furthermore, developers must conduct annual reviews of these safety protocols, retaining documentation of testing methods and results to enable third-party verification. While the law initially mandated annual independent audits, it remains unclear whether this requirement will be retained after the amendments.
Reporting obligations are stringent: developers must report any “safety incident” to DHSES within 72 hours of becoming aware of it. Safety incidents include occurrences of critical harm and any event that demonstrates an increased risk of such harm. Reports must state the date of the incident and provide a clear description of what occurred. The statute also prohibits developers from making false or misleading statements in compliance documents and protects employees from retaliation for reporting potential risks.
The establishment of the DFS oversight office marks a significant deviation from California’s approach, where oversight is managed by the Office of Emergency Services. The DFS is known for its rigorous enforcement of cybersecurity regulations, particularly through its Part 500 Cybersecurity Regulation for financial institutions. Developers should prepare for similar scrutiny, including extensive document requests and potential enforcement actions that could require operational changes.
Additionally, organizations that have used knowledge distillation from large models such as GPT-4 or Claude should determine whether they meet the revenue threshold, regardless of their direct training costs. Knowledge distillation alone does not trigger coverage, but a developer whose annual revenue exceeds $500 million and whose model has been distilled from a frontier model falls under the law.
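A minimal sketch of that coverage logic, assuming the $500 million annual-revenue gate and the frontier-model test described above (names are hypothetical; this is an illustration, not legal guidance):

    # Illustrative sketch of the coverage logic described above.
    # The revenue threshold is from the article; names are hypothetical.

    REVENUE_THRESHOLD = 500_000_000  # USD, annual revenue

    def raise_act_covers(annual_revenue_usd: float,
                         develops_or_distills_frontier_model: bool) -> bool:
        """Rough applicability check: both conditions must hold."""
        # Distillation from a frontier model alone does not trigger coverage;
        # the developer must also exceed the annual-revenue threshold.
        return (annual_revenue_usd > REVENUE_THRESHOLD
                and develops_or_distills_frontier_model)

    # Example: a developer with $600 million in revenue whose model was distilled
    # from a frontier model would fall within scope under this sketch.
    print(raise_act_covers(600_000_000, True))  # True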
Enforcement authority is vested in the Attorney General, who can impose civil penalties of up to $1 million for first violations and up to $3 million for subsequent violations, a reduction from earlier proposed penalties of $10 million and $30 million. The Attorney General may also pursue injunctive or declaratory relief against non-compliant developers.
For organizations deploying frontier models, it is crucial to update vendor contracts to address incident notification requirements and to reference the developer’s published safety protocols. Doing so gives deployers a clearer picture of the operational and compliance risks they assume when relying on a covered model.
As the federal government evaluates state laws, potential constitutional challenges to the RAISE Act loom. The Commerce Department’s assessment may identify New York’s legislation as problematic, raising concerns about interstate commerce regulation, potential preemption by federal laws, and First Amendment implications regarding disclosure requirements. Until federal courts adjudicate these issues, the law will remain enforceable as of its effective date, compelling organizations to prepare compliance frameworks despite the potential for future legal challenges.