AI Regulation

New York’s New AI Law Sets $500M Revenue Threshold, Effective January 2027

New York’s RAISE Act imposes stringent safety obligations on developers of frontier AI models with more than $500 million in annual revenue, effective January 1, 2027, to mitigate catastrophic risks.

New York has officially joined California in leading the charge for artificial intelligence regulation in the United States. On Friday, Governor Kathy Hochul signed a revised version of the Responsible AI Safety and Education Act, known as the RAISE Act, which establishes stringent safety obligations for developers of advanced AI systems. While the final version of the law is less punitive than its June predecessor, it represents a significant development in state-level AI safety legislation. Businesses should prepare for its implementation, set to take effect on January 1, 2027, by understanding its implications and requirements.

The RAISE Act focuses on mitigating catastrophic risks associated with highly capable AI systems, distinguishing it from other state laws that emphasize bias and consumer protection. The law specifically targets “frontier models,” which are the most advanced AI systems capable of causing serious harm, including cyberattacks and infrastructure damage. It applies to AI developers with annual revenues exceeding $500 million and those operating frontier AI models in New York.

One key aspect of the RAISE Act is its set of core safety provisions, which fall into four areas. First, covered developers must create and adhere to written safety protocols and conduct thorough assessments of the potential risks their AI systems could pose. This includes implementing safeguards to prevent or mitigate risks of “critical harm” to individuals or property.

Second, AI developers are required to report critical safety incidents to the state within 72 hours of their occurrence, a significantly shorter timeframe than California’s 15-day reporting period. This rapid notification is aimed at ensuring prompt action in the event of safety breaches.

Third, the law establishes a new AI oversight office within the New York Department of Financial Services (DFS). This office will oversee registration for covered developers, collect fees for regulatory support, issue regulations, and publish annual reports detailing AI safety risks. Lastly, enforcement of the law will fall to the New York Attorney General, who can impose penalties of up to $1 million for first violations and up to $3 million for subsequent violations. Notably, the law does not provide a private right of action for individuals seeking to file lawsuits.

The RAISE Act’s effective date gives regulators time to establish the oversight framework and companies time to prepare for compliance. Businesses must remain vigilant, as the regulatory landscape continues to evolve, particularly in light of recent federal challenges to state-level AI regulation. President Trump’s executive order authorizing federal lawsuits against states whose AI laws are perceived as stifling innovation could complicate the regulatory environment, while some Congressional Republicans are advocating proposals that would limit state-level AI regulation.

As the industry anticipates these developments, companies are advised to take proactive steps in preparation. Even organizations not directly developing frontier AI models should keep abreast of the RAISE Act and use 2026 to implement necessary measures. This may include conducting vendor diligence to assess whether AI models utilized by third parties fall under the law’s definitions and understanding how they manage safety risks. Additionally, businesses should consider incorporating AI safety disclosures into procurement agreements and maintaining clear internal governance policies to navigate the evolving regulatory landscape.

New York and California are setting early benchmarks in AI regulation, and their models may influence other states as they formulate their own frameworks. Consequently, businesses operating across multiple states should be prepared for a patchwork of compliance requirements. As federal intervention looms, the potential for a comprehensive national approach remains uncertain, but it is essential for companies to stay informed and adaptable.


