In a significant move that could redefine the landscape of artificial intelligence governance, New York Governor Kathy Hochul has signed the RAISE Act into law, positioning the state as a frontrunner in AI safety legislation. Arriving amid stagnation at the federal level, the law establishes a robust regulatory framework that technology companies must now navigate. For those in the cryptocurrency and blockchain sectors monitoring regulatory trends, the development underscores the growing urgency with which governments are approaching oversight of emerging technologies.
The RAISE Act introduces comprehensive requirements for AI developers operating within New York and is being characterized as the most stringent state-level AI regulation in the United States. The law requires large AI developers to disclose detailed safety testing information and risk mitigation protocols and to report safety incidents to state authorities within 72 hours; it also establishes a new dedicated AI monitoring office under the Department of Financial Services. Violations carry substantial penalties, with fines of up to $1 million for initial offenses and $3 million for repeat infractions.
The path to the RAISE Act's enactment was fraught with political challenges. After state lawmakers passed the bill in June, intense lobbying from the tech industry led Governor Hochul to propose significant amendments. According to reporting from The New York Times, a compromise was ultimately reached: Hochul agreed to sign the bill in its original form, while lawmakers committed to addressing her requested revisions in the next legislative session. The episode underscores the ongoing tension between fostering innovation and implementing necessary oversight. State Senator Andrew Gounardes, one of the bill’s sponsors, remarked, “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country.”
New York is not alone in its regulatory efforts. California Governor Gavin Newsom signed similar legislation in September, resulting in what Governor Hochul describes as a “unified benchmark” between the two prominent technology states. Both legislative frameworks aim to establish clearer safety requirements and risk assessments for AI technologies, with New York’s RAISE Act and California’s recent safety bill sharing comparable penalty structures.
Hochul emphasized the importance of this coordinated approach, stating, “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public.”
Reactions from the technology sector have been varied. Prominent companies such as OpenAI and Anthropic have expressed support for the RAISE Act while simultaneously advocating for comprehensive federal legislation to create cohesive standards. Anthropic’s Sarah Heck noted in an interview with the NYT that state-level initiatives should inspire Congress to build upon them. However, opposition is evident as well, with a super PAC backed by Andreessen Horowitz and OpenAI President Greg Brockman reportedly targeting Assemblyman Alex Bores, a co-sponsor of the bill.
Amid these challenges, federal authorities are also responding. Recently, President Donald Trump issued an executive order directing federal agencies to contest state-level AI laws, a move backed by his AI czar David Sacks. This initiative signifies an ongoing struggle to delineate the boundaries of state versus federal regulatory authority, and it is expected to encounter legal challenges.
For businesses engaged in AI development, the enactment of the RAISE Act necessitates immediate compliance considerations. Companies must document their safety protocols, establish clear incident response plans, and prepare for a landscape where both state and federal regulations may dictate operational practices. Financial planning should account for potential regulatory costs and the risk of penalties.
New York’s RAISE Act is more than another set of regulations; it marks a pivotal shift in how governments approach artificial intelligence governance. As state legislatures take the lead where federal initiatives falter, a patchwork regulatory environment is emerging that could either stimulate innovation through clear guidelines or complicate it through inconsistent requirements. The dynamic mirrors the broader tensions seen in the cryptocurrency and blockchain sectors, where the balancing act between innovation and protection continues to play out.
The RAISE Act stands as a transformative moment in AI safety legislation, establishing essential requirements, imposing significant penalties, and creating dedicated oversight mechanisms. Governor Hochul’s actions position New York at the vanguard of responsible AI development, setting the stage for potential collaboration with California in creating a cohesive regulatory approach. The following months will be crucial in determining how effectively this framework operates, how the industry adapts to the new requirements, and whether federal lawmakers will follow the states’ lead.
See also
Al Jazeera Unveils ‘The Core’: A Game-Changing AI Model Transforming Journalism with Google Cloud
Google Gemini 3 Surpasses ChatGPT as AI Race Intensifies Amid Data Monopoly Concerns
Integrating AI Ethics into Education: Essential Skills for Navigating Digital Risks
Microsoft CEO Demands Total AI Commitment Amid Operational Struggles and Investor Pressure
Two South Africans Excel at Google DeepMind, Pioneering AI Research Together