The state of New York has introduced significant regulations governing artificial intelligence (AI) companions, requiring operators and providers to establish safety protocols for detecting and responding to users who express suicidal ideation or self-harm. Under the new AI Companion Model law, which takes effect shortly, operators must also make clear to users that they are interacting with an AI rather than a human being. The initiative underscores the state’s commitment to user safety in the rapidly evolving landscape of AI technologies.
The AI Companion Model law covers all operators of AI companions used by New York residents. Defined broadly, “AI companions” include systems that use AI, generative AI, and “emotional recognition algorithms” to foster simulated human-like relationships, for example by retaining user interaction histories, personalizing experiences, and engaging users through unsolicited emotional inquiries. The law explicitly excludes AI systems used purely for customer service or productivity purposes.
Among the key requirements, operators must provide a clear notification, either verbally or in writing, that the user is not communicating with a human. This notice must be given at the start of an interaction and at least every three hours during ongoing interactions. Operators must also implement measures to detect expressions of suicidal ideation or self-harm and, upon detection, refer users to appropriate crisis services.
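For operators planning compliance, the sketch below illustrates where these two obligations might hook into a chat loop. It is a minimal illustration, not a description of any statutory standard: the function names, the three-hour interval handling, the keyword list, and the referral wording are all assumptions, and a real system would replace the keyword check with a properly validated detection model and a clinically reviewed referral flow.

```python
from datetime import datetime, timedelta

# Illustrative sketch only: the law requires an AI-status disclosure and
# crisis referral, but does not prescribe this interface or wording.
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI, not a human."
CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, help is available. "
    "In the U.S., you can call or text 988 (Suicide & Crisis Lifeline)."
)
# Placeholder for a real self-harm detection model.
SELF_HARM_PHRASES = {"suicide", "kill myself", "self-harm", "end my life"}
DISCLOSURE_INTERVAL = timedelta(hours=3)


def needs_disclosure(last_disclosed, now):
    """True if the AI-status notice should be (re)shown."""
    return last_disclosed is None or now - last_disclosed >= DISCLOSURE_INTERVAL


def flags_self_harm(message):
    """Very rough placeholder check for expressions of suicidal ideation."""
    text = message.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)


def handle_user_message(message, last_disclosed, now):
    """Return system notices to show with the AI reply, plus the updated
    timestamp of the most recent disclosure."""
    notices = []
    if needs_disclosure(last_disclosed, now):
        notices.append(DISCLOSURE_TEXT)
        last_disclosed = now
    if flags_self_harm(message):
        notices.append(CRISIS_REFERRAL)
    return notices, last_disclosed


# Example: a new session where the user's first message raises a flag.
notices, last = handle_user_message("i want to end my life", None, datetime.now())
print(notices)  # both the disclosure and the crisis referral
```

The design choice worth noting is that the disclosure and the crisis referral are handled as system notices layered on top of the AI’s reply, so the compliance logic stays separate from the model itself.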
Enforcement of the AI Companion Model law rests with the state attorney general, who may seek civil penalties of up to $15,000 per day for violations of the notification and safety requirements. This stringent framework reflects a growing recognition of the potential psychological impacts of AI technologies.
Operators covered by the legislation should plan how to clearly communicate the non-human nature of their services, including the frequency and manner of the required notifications, and should establish protocols for identifying and responding to suicidal ideation or self-harm among users.
New York’s approach parallels California’s, which passed a comparable law in October 2025 that takes effect on January 1, 2026. California’s SB 243 closely aligns with New York’s requirements but differs in certain notification practices and protections for minors. As states grapple with the implications of AI technologies, regulatory efforts are expanding; Utah, Colorado, and Kentucky are also beginning to implement their own frameworks governing AI and chatbot interactions.
In a broader legislative context, New York also passed the Responsible Artificial Intelligence Safety and Education (RAISE) Act, effective January 1, 2027. This act targets developers of frontier AI models, requiring transparency and accountability measures. The RAISE Act mandates that developers conduct annual safety reviews and independent audits, publish safety protocols, and report any significant safety incidents within a defined timeframe. The legislation aims to mitigate risks associated with advanced AI systems, particularly those capable of causing significant harm.
Amendments to the RAISE Act were signed into law on December 19, 2025, reflecting ongoing discussions among state leaders about the need for comprehensive AI regulation. The act applies to large-scale AI systems, defined as models trained using more than 10²⁶ computational operations at a compute cost exceeding $100 million.
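For a sense of scale only, the two thresholds can be read as a simple joint test. The figures in the example below are invented, and the statute’s actual definition contains further nuances that this sketch ignores.

```python
# Hypothetical illustration of the two thresholds described above; the
# example figures are invented and the statutory definition has more detail.
COMPUTE_THRESHOLD_OPS = 1e26        # computational operations used in training
COST_THRESHOLD_USD = 100_000_000    # training compute cost in dollars


def crosses_raise_thresholds(training_ops, training_cost_usd):
    """True if a model exceeds both the compute and the cost thresholds."""
    return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD


# Example: 3 x 10^26 operations at a $150 million compute cost.
print(crosses_raise_thresholds(3e26, 150_000_000))  # True
```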
Amid rising concerns about the safety of AI technologies, the RAISE Act prohibits the deployment of models that pose an “unreasonable risk of critical harm.” Developers will be required to maintain detailed records and ensure their systems do not engage in activities that could lead to serious injury or substantial economic damage.
The New York Department of Financial Services is set to establish a dedicated office to oversee AI development. Civil penalties for non-compliance with the RAISE Act could reach up to $1 million for initial violations and $3 million for subsequent breaches, a marked reduction from earlier proposed penalties. The act also includes protections for whistleblowers, aimed at fostering a culture of safety and accountability within the AI industry.
As AI technologies continue to evolve, states are increasingly recognizing the necessity for regulatory frameworks to safeguard users. New York’s proactive measures reflect a broader trend among states to address the complexities associated with AI and ensure that developers prioritize safety and ethical considerations in their innovations. With additional legislation anticipated in 2026, the regulatory landscape for AI is expected to expand further, responding to emerging challenges in the sector.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health