
New York Enacts AI Companion Model Law and RAISE Act to Strengthen AI Safety Regulation

New York requires operators of AI companions to implement safety protocols for detecting suicidal ideation, with civil penalties of up to $15,000 per day for non-compliance.

The state of New York has introduced significant regulations governing artificial intelligence (AI) companions, requiring operators and providers to establish safety protocols for detecting and responding to users who express suicidal thoughts or self-harm. Under the new AI Companion Model law, which will come into effect shortly, operators must also make clear to users that they are interacting with an AI rather than a human being. The initiative underscores the state's commitment to user safety in the rapidly evolving landscape of AI technologies.

The AI Companion Model law encompasses all operators of AI companions used by residents of New York. Defined broadly, “AI companions” include systems that leverage AI, generative AI, and “emotional recognition algorithms” to foster simulated human-like relationships. This may involve retaining user interaction histories, personalizing experiences, and engaging users through unsolicited emotional inquiries. The law explicitly excludes AI systems utilized purely for customer service or productivity purposes.

Among the key requirements, operators must provide a clear notification to users, either verbally or in writing, indicating that they are not communicating with a human. This notice is required at least every three hours during ongoing interactions but need not be repeated more than once daily. Furthermore, operators are mandated to implement measures for detecting and addressing any expressions of suicidal ideation or self-harm. Upon such detection, operators must refer users to relevant crisis services.

The enforcement of the AI Companion Model law will be the responsibility of the state attorney general, who is empowered to impose civil penalties of up to $15,000 per day for violations related to notifications and safety measures. This stringent regulatory framework reflects a growing recognition of the potential psychological impacts of AI technologies.

Operators affected by this legislation are advised to plan how they will clearly communicate the non-human nature of their services, including the frequency and manner of these notifications. They will also need to establish protocols for identifying and responding to expressions of suicidal ideation or self-harm among users.

New York’s approach mirrors similar legislation in California, which passed a comparable law in October 2025, set to take effect on January 1, 2026. California’s SB 243 closely aligns with New York’s requirements but includes specific variations concerning notification practices and protections for minors. As states grapple with the implications of AI technologies, regulatory efforts are expanding, with Utah, Colorado, and Kentucky also beginning to implement their own frameworks to govern AI and chatbot interactions.

In a broader legislative context, New York also passed the Responsible Artificial Intelligence Safety and Education (RAISE) Act, effective January 1, 2027. This act targets developers of frontier AI models, requiring transparency and accountability measures. The RAISE Act mandates that developers conduct annual safety reviews and independent audits, publish safety protocols, and report any significant safety incidents within a defined timeframe. The legislation aims to mitigate risks associated with advanced AI systems, particularly those capable of causing significant harm.

Amendments to the RAISE Act were signed into law on December 19, 2025, reflecting ongoing discussions among state leaders about the need for comprehensive AI regulations. The act applies to large-scale AI systems, defined as those requiring more than 10²⁶ computational operations and incurring costs exceeding $100 million.

Amid rising concerns regarding the safety of AI technologies, the RAISE Act emphasizes prohibiting the deployment of models posing an “unreasonable risk of critical harm.” Developers will be required to maintain detailed records and ensure their systems do not engage in activities that could lead to significant negative outcomes, such as serious injuries or significant economic damage.

The New York Department of Financial Services is set to establish a dedicated office to oversee AI development. Civil penalties for non-compliance with the RAISE Act could reach up to $1 million for initial violations and $3 million for subsequent breaches, a marked reduction from earlier proposed penalties. The act also includes protections for whistleblowers, aimed at fostering a culture of safety and accountability within the AI industry.

As AI technologies continue to evolve, states are increasingly recognizing the necessity for regulatory frameworks to safeguard users. New York’s proactive measures reflect a broader trend among states to address the complexities associated with AI and ensure that developers prioritize safety and ethical considerations in their innovations. With additional legislation anticipated in 2026, the regulatory landscape for AI is expected to expand further, responding to emerging challenges in the sector.

Written By the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.