
New York’s RAISE Act Sets $500M Revenue Threshold for AI Safety Compliance by 2027

New York’s RAISE Act requires AI developers with over $500M in annual revenue to comply with stringent safety protocols by 2027, with civil penalties of up to $3M for violations.

New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law, establishing a regulatory framework for organizations developing or deploying frontier AI models. The law will take effect on January 1, 2027, and comes in the wake of President Trump’s executive order aimed at curbing state-level AI regulations. The RAISE Act introduces enforceable compliance obligations, civil penalties, and the establishment of a new oversight office within the state’s Department of Financial Services (DFS).

Following chapter amendments set to be enacted in January 2026, the law will apply to developers of “frontier models” whose annual revenues exceed $500 million. Frontier models are defined as AI systems trained using more than 10²⁶ computational operations (FLOPs) at a compute cost greater than $100 million. The definition also covers models developed through “knowledge distillation,” wherein a smaller AI model is trained on the outputs of a larger one.
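As a rough illustration (not legal guidance), the definitional test amounts to a simple two-part threshold check. In the Python sketch below, the constants, function name, and signature are illustrative assumptions drawn from the figures above, not part of any official tooling.

```python
# Minimal sketch, not a compliance tool: encodes the two definitional
# thresholds described in this article. All names are hypothetical.

FLOP_THRESHOLD = 10**26               # training compute, in operations
COMPUTE_COST_THRESHOLD = 100_000_000  # training compute cost, in USD


def is_frontier_model(training_flops: int, compute_cost_usd: float) -> bool:
    """True when a model exceeds both definitional thresholds:
    more than 10**26 operations and more than $100M in compute cost."""
    return (training_flops > FLOP_THRESHOLD
            and compute_cost_usd > COMPUTE_COST_THRESHOLD)
```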

The revenue threshold aligns New York’s regulations with California’s Transparency in Frontier Artificial Intelligence Act (TFAIA), creating a “unified benchmark among the country’s leading tech states,” according to Hochul’s office. While the original statute employed compute-cost thresholds, the revised framework focuses on revenue to better harmonize with California’s legislative approach. Notably, accredited colleges and universities engaged in academic research are exempt from these regulations, provided they do not transfer intellectual property rights to commercial entities.

Under the RAISE Act, developers must implement and publish detailed safety and security protocols before deploying a frontier model. These protocols must reduce the risk of “critical harm,” which is defined as the potential for death or serious injury to 100 or more people or at least $1 billion in damages. Developers are also required to describe the testing procedures used to assess the risk of such harm, designate personnel responsible for compliance, and provide copies of their protocols to the Division of Homeland Security and Emergency Services (DHSES) and the New York Attorney General.

Furthermore, developers must conduct annual reviews of these safety protocols, retaining documentation of testing methods and results to enable third-party verification. While the law initially mandated annual independent audits, it remains unclear whether this requirement will be retained after the amendments.

Reporting obligations are stringent: developers must report any “safety incident” to DHSES within 72 hours of becoming aware of it. This includes occurrences of critical harm or any event that demonstrates an increased risk of such harm. Reports must state the date of the incident and provide a clear description of the event. The statute also prohibits developers from making false or misleading statements in compliance documents and protects employees from retaliation for reporting potential risks.
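For teams operationalizing that duty, the 72-hour window itself is trivial to compute. The sketch below assumes the clock runs continuously from the moment of awareness, an assumption the statute may not share (it could define notice or time computation differently), and the function name is hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Assumption: a continuous 72-hour clock starting at awareness.
REPORTING_WINDOW = timedelta(hours=72)


def dhses_report_deadline(became_aware_at: datetime) -> datetime:
    """Latest time an incident report would be due under that assumption."""
    return became_aware_at + REPORTING_WINDOW


# Example: awareness at noon UTC on March 3, 2027 -> due noon UTC on March 6.
print(dhses_report_deadline(datetime(2027, 3, 3, 12, 0, tzinfo=timezone.utc)))
```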

The establishment of the DFS oversight office marks a significant deviation from California’s approach, where oversight is managed by the Office of Emergency Services. The DFS is known for its rigorous enforcement of cybersecurity regulations, particularly through its Part 500 Cybersecurity Regulation for financial institutions. Developers should prepare for similar scrutiny, including extensive document requests and potential enforcement actions that could require operational changes.

Additionally, organizations that have used knowledge distillation from large models such as GPT-4 or Claude should determine whether they meet the revenue threshold, regardless of their direct training costs. Knowledge distillation alone does not trigger coverage, but a developer whose revenue exceeds $500 million and whose model was distilled from a frontier model falls under the law.
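A minimal sketch of that coverage logic, under the same caveats as the earlier snippet (all names are hypothetical assumptions, and this is not a compliance tool), might look like this:

```python
REVENUE_THRESHOLD = 500_000_000  # annual revenue, in USD


def is_covered_developer(annual_revenue_usd: float,
                         built_frontier_model: bool,
                         distilled_from_frontier: bool) -> bool:
    """Sketch of the coverage test described above: distillation alone
    never triggers coverage, but a developer over the revenue threshold
    is covered if its model was trained as, or distilled from, a
    frontier model."""
    if annual_revenue_usd <= REVENUE_THRESHOLD:
        return False  # below the threshold, distillation or not
    return built_frontier_model or distilled_from_frontier
```

On this reading, a developer with $600 million in revenue that only shipped a model distilled from a frontier system would still be covered, while a lab below the revenue threshold would not be, however its model was built.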

Enforcement authority is vested in the Attorney General, who can impose civil penalties of up to $1 million for first violations and up to $3 million for subsequent violations, a reduction from earlier proposed penalties of $10 million and $30 million. The Attorney General may also pursue injunctive or declaratory relief against non-compliant developers.

For organizations deploying frontier models, it is crucial to update vendor contracts to address incident-notification requirements and to reference developers’ published safety protocols. Doing so gives deployers a clearer view of the operational risks tied to a vendor’s regulatory compliance.

As the federal government evaluates state laws, potential constitutional challenges to the RAISE Act loom. The Commerce Department’s assessment may identify New York’s legislation as problematic, raising concerns about interstate commerce regulation, potential preemption by federal laws, and First Amendment implications regarding disclosure requirements. Until federal courts adjudicate these issues, the law will remain enforceable as of its effective date, compelling organizations to prepare compliance frameworks despite the potential for future legal challenges.

