
AI Regulation

New York Enacts RAISE Act Mandating AI Safety Reporting for Developers by 2027

New York’s RAISE Act requires large AI developers such as OpenAI to report safety incidents within 72 hours, with civil penalties of up to $3 million for repeat violations once the law takes effect in 2027.

New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law on Friday, establishing a framework intended to enhance safety and transparency for frontier AI models. The legislation mandates that large AI developers disclose their safety protocols and report any incidents related to model safety to the state within 72 hours of awareness. The law is set to take effect on January 1, 2027.

In her announcement, Hochul stated, “By enacting the RAISE Act, New York is once again leading the nation in setting a strong and sensible standard for frontier AI safety, holding the biggest developers accountable for their safety and transparency protocols.” The law aims to address growing concerns regarding the implications of AI technologies, following a surge in public interest and regulatory scrutiny.

The introduction of the RAISE Act represents a direct response to an executive order issued by President Donald Trump earlier this month, which challenged the authority of states to regulate AI technologies. Hochul characterized the legislation as a “nation-leading approach to AI safety,” specifically noting its inspiration from California’s recently enacted Transparency in Frontier Artificial Intelligence Act.

“This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” she added. The RAISE Act is the first state-level AI regulation enacted since Trump’s order, which aims to establish an AI Litigation Task Force to contest state laws that are deemed to interfere with existing federal regulations or “unconstitutionally regulate interstate commerce.”

New York’s RAISE Act will require large AI developers, including companies like OpenAI and Anthropic, to document their technical and organizational safety protocols as well as their testing and evaluation procedures. Developers must also designate a senior employee responsible for compliance with these regulations. In addition, the law imposes a reporting requirement for incidents such as unintended behaviors by AI models, technical failures, and security breaches.

Should developers fail to comply with the reporting requirements or provide false information, they could face civil penalties reaching up to $1 million for initial violations and up to $3 million for subsequent infractions. State Senator Andrew Gounardes remarked, “Big tech oligarchs think it’s fine to put their profits ahead of our safety — we disagree. With this law, we make clear that tech innovation and safety don’t have to be at odds.”

The RAISE Act aims to create a sense of accountability among AI developers, reflecting a national trend toward increased regulation of technology in response to public concerns. As states like New York and California set these precedents, it could pave the way for a more cohesive regulatory landscape, especially as the federal government continues to grapple with the complexities of AI governance.

As the dialogue around AI regulation continues, the implications of the RAISE Act may influence how other states approach similar legislation, especially in light of ongoing challenges posed by federal oversight. The contrasting positions on AI regulation between state and federal levels will likely fuel further debate on the future of technological governance in the United States.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.