New York Enacts AI Companion Model Law and RAISE Act to Strengthen AI Safety Regulation

New York will require operators of AI companions to implement safety protocols for detecting suicidal ideation, with civil penalties of up to $15,000 per day for non-compliance.

The state of New York has introduced significant regulations governing artificial intelligence (AI) companions, requiring operators and providers to establish safety protocols for detecting and responding to users who express suicidal ideation or self-harm. Under the new AI Companion Model law, which takes effect shortly, operators must also make clear to users that they are interacting with an AI, not a human being. This initiative underscores the state’s commitment to ensuring user safety in the rapidly evolving landscape of AI technologies.

The AI Companion Model law encompasses all operators of AI companions used by residents of New York. Defined broadly, “AI companions” include systems that leverage AI, generative AI, and “emotional recognition algorithms” to foster simulated human-like relationships. This may involve retaining user interaction histories, personalizing experiences, and engaging users through unsolicited emotional inquiries. The law explicitly excludes AI systems utilized purely for customer service or productivity purposes.

Among the key requirements, operators must provide a clear notification to users, either verbally or in writing, indicating that they are not communicating with a human. During a continuous interaction, this notice must be repeated at least every three hours; across separate interactions, it need not be given more than once per day. Operators must also implement measures to detect expressions of suicidal ideation or self-harm and, upon detection, refer users to relevant crisis services.
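To make these obligations concrete, the sketch below shows one way an operator might wire both duties into a chat loop. It is a minimal illustration, not a statement of what the law technically requires: the class and function names are invented for this example, the keyword list is a crude stand-in for a real detection model, and the referral wording (pointing to the 988 Suicide & Crisis Lifeline in the U.S.) is illustrative.

```python
import time

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
DISCLOSURE_INTERVAL_S = 3 * 60 * 60  # at least every three hours in an ongoing interaction

# 988 is the U.S. Suicide & Crisis Lifeline; the wording here is illustrative.
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you can call or text 988 "
    "to reach the Suicide & Crisis Lifeline."
)

# Placeholder heuristic; a real deployment would use a vetted classifier.
SELF_HARM_CUES = ("kill myself", "end my life", "hurt myself")


def detects_self_harm(message: str) -> bool:
    """Stand-in detector for expressions of suicidal ideation or self-harm."""
    lowered = message.lower()
    return any(cue in lowered for cue in SELF_HARM_CUES)


class CompanionCompliance:
    """Tracks when the AI disclosure was last shown and screens user messages."""

    def __init__(self) -> None:
        self._last_disclosure: float | None = None

    def notices_for(self, user_message: str) -> list[str]:
        """Return any compliance notices to display before the model's reply."""
        notices: list[str] = []
        now = time.monotonic()
        if (self._last_disclosure is None
                or now - self._last_disclosure >= DISCLOSURE_INTERVAL_S):
            notices.append(AI_DISCLOSURE)
            self._last_disclosure = now
        if detects_self_harm(user_message):
            notices.append(CRISIS_REFERRAL)
        return notices
```

An operator would call notices_for() on each incoming message and prepend any returned notices to the companion's reply; a production detection and referral workflow would, of course, be far more involved.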

The enforcement of the AI Companion Model law will be the responsibility of the state attorney general, who is empowered to impose civil penalties of up to $15,000 per day for violations related to notifications and safety measures. This stringent regulatory framework reflects a growing recognition of the potential psychological impacts of AI technologies.

Operators affected by the legislation should plan how to clearly communicate the non-human nature of their services, including the frequency and manner of the required notifications. They will also need to establish protocols for identifying and managing expressions of suicidal ideation or self-harm among users.

New York’s approach mirrors similar legislation in California, which passed a comparable law in October 2025, set to take effect on January 1, 2026. California’s SB 243 closely aligns with New York’s requirements but includes specific variations concerning notification practices and protections for minors. As states grapple with the implications of AI technologies, regulatory efforts are expanding, with Utah, Colorado, and Kentucky also beginning to implement their own frameworks to govern AI and chatbot interactions.

In a broader legislative context, New York also passed the Responsible Artificial Intelligence Safety and Education (RAISE) Act, effective January 1, 2027. This act targets developers of frontier AI models, requiring transparency and accountability measures. The RAISE Act mandates that developers conduct annual safety reviews and independent audits, publish safety protocols, and report any significant safety incidents within a defined timeframe. The legislation aims to mitigate risks associated with advanced AI systems, particularly those capable of causing significant harm.

Amendments to the RAISE Act were signed into law on December 19, 2025, reflecting ongoing discussions among state leaders about the need for comprehensive AI regulations. The act applies to large-scale AI systems, defined as those requiring more than 10²⁶ computational operations and incurring costs exceeding $100 million.
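As a quick illustration of these coverage thresholds, the snippet below checks whether a hypothetical training run falls under the act. Both figures come from the summary above; the conjunctive test (both thresholds must be exceeded) follows that wording, and the function name and inputs are invented for this example.

```python
FLOP_THRESHOLD = 1e26              # more than 10^26 computational operations
COST_THRESHOLD_USD = 100_000_000   # training costs exceeding $100 million


def is_covered_frontier_model(training_flops: float, training_cost_usd: float) -> bool:
    """True if a training run exceeds both thresholds described above."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD_USD


# A hypothetical 3e26-FLOP run costing $250 million would be covered ...
print(is_covered_frontier_model(3e26, 250_000_000))   # True
# ... while a 5e25-FLOP run costing $40 million would not.
print(is_covered_frontier_model(5e25, 40_000_000))    # False
```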

Amid rising concerns about the safety of AI technologies, the RAISE Act prohibits the deployment of models posing an “unreasonable risk of critical harm.” Developers will be required to maintain detailed records and ensure their systems do not engage in activities that could lead to severe negative outcomes, such as serious injury or substantial economic damage.

The New York Department of Financial Services is set to establish a dedicated office to oversee AI development. Civil penalties for non-compliance with the RAISE Act could reach up to $1 million for initial violations and $3 million for subsequent breaches, a marked reduction from earlier proposed penalties. The act also includes protections for whistleblowers, aimed at fostering a culture of safety and accountability within the AI industry.

As AI technologies continue to evolve, states are increasingly recognizing the necessity for regulatory frameworks to safeguard users. New York’s proactive measures reflect a broader trend among states to address the complexities associated with AI and ensure that developers prioritize safety and ethical considerations in their innovations. With additional legislation anticipated in 2026, the regulatory landscape for AI is expected to expand further, responding to emerging challenges in the sector.
