
New York’s New AI Law Sets $500M Revenue Threshold, Effective January 2027

New York’s RAISE Act imposes stringent safety obligations on AI developers with more than $500 million in annual revenue, effective January 2027, to mitigate catastrophic risks.

New York has officially joined California in leading the charge for artificial intelligence regulation in the United States. On Friday, Governor Kathy Hochul signed a revised version of the Responsible AI Safety and Education Act, known as the RAISE Act, which establishes stringent safety obligations for developers of advanced AI systems. While the final version of the law is less punitive than its June predecessor, it represents a significant development in state-level AI safety legislation. Businesses should prepare for its implementation, set to take effect on January 1, 2027, by understanding its implications and requirements.

The RAISE Act focuses on mitigating catastrophic risks associated with highly capable AI systems, distinguishing it from other state laws that emphasize bias and consumer protection. The law specifically targets “frontier models,” which are the most advanced AI systems capable of causing serious harm, including cyberattacks and infrastructure damage. It applies to AI developers with annual revenues exceeding $500 million and those operating frontier AI models in New York.

At the core of the RAISE Act are four mandatory safety obligations for covered companies. First, developers must create and adhere to written safety protocols and conduct thorough assessments of the potential risks their AI systems could pose. This includes implementing safeguards to prevent or mitigate risks of “critical harm” to individuals or property.

Second, AI developers are required to report critical safety incidents to the state within 72 hours of their occurrence, a significantly shorter timeframe than California’s 15-day reporting period. This rapid notification is aimed at ensuring prompt action in the event of safety breaches.

Third, the law establishes a new AI oversight office within the New York Department of Financial Services (DFS). This office will oversee registration for covered developers, collect fees for regulatory support, issue regulations, and publish annual reports detailing AI safety risks. Lastly, enforcement of the law will fall to the New York Attorney General, who can impose penalties of up to $1 million for first violations and up to $3 million for subsequent violations. Notably, the law does not provide a private right of action for individuals seeking to file lawsuits.

The RAISE Act’s effective date allows time for regulators to establish the oversight framework and gives companies time to prepare for compliance. Businesses must remain vigilant, as the regulatory landscape continues to evolve, especially in light of recent federal challenges to state-level AI regulations. President Trump’s executive order authorizing federal lawsuits against states with AI laws perceived as stifling innovation could complicate the regulatory environment, while some Congressional Republicans are advocating for proposals that would limit state-level AI regulation.

As the industry anticipates these developments, companies are advised to take proactive steps in preparation. Even organizations not directly developing frontier AI models should keep abreast of the RAISE Act and use 2026 to implement necessary measures. This may include conducting vendor diligence to assess whether AI models utilized by third parties fall under the law’s definitions and understanding how they manage safety risks. Additionally, businesses should consider incorporating AI safety disclosures into procurement agreements and maintaining clear internal governance policies to navigate the evolving regulatory landscape.

New York and California are setting early benchmarks in AI regulation, and their models may influence other states as they formulate their own frameworks. Consequently, businesses operating across multiple states should be prepared for a patchwork of compliance requirements. As federal intervention looms, the potential for a comprehensive national approach remains uncertain, but it is essential for companies to stay informed and adaptable.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.