Connect with us

Hi, what are you looking for?

AI Regulation

Oregon Lawmaker Proposes AI Companion Regulations to Safeguard Youth Mental Health

Oregon Senator Lisa Reynolds proposes legislation mandating AI companions disclose their non-human status and implement youth mental health safeguards following alarming incidents of emotional dependency.

PORTLAND, Ore. (KATU) — An Oregon lawmaker is advocating for new regulations targeting artificial intelligence “companions,” a category of chatbots designed to mimic emotional and social relationships with users. This proposal comes amid growing concerns about the potential mental health impacts of these tools, particularly among children and teenagers.

Supporters argue that the legislation aims to mitigate risks associated with AI companions that could exacerbate mental health crises. Under the proposed bill, companies offering AI companions in Oregon would be mandated to inform users at every stage of interaction that they are conversing with software and not a human being. Additionally, the legislation calls for safeguards to identify indications of suicidal thoughts or self-harm, requiring AI systems to halt interactions that could worsen these feelings and redirect users to crisis or suicide-prevention resources.

The initiative is partly inspired by concerns that emotionally responsive chatbots could blur the boundary between artificial interactions and real human support. Senator Lisa Reynolds, a Democrat from Portland and a pediatrician advocating for the legislation, likened the current lack of oversight for AI companions to the early days of social media. “Right now, there’s really no guardrails or kind of supervision or regulation of AI tools, chatbots,” Reynolds stated. “We’re finding, kind of like at the advent of social media, that there are some really dangerous pitfalls that people are falling through.”

Reynolds pointed to a recent case in California where parents filed a lawsuit against an AI chatbot company, claiming that their teenage child became emotionally dependent on a chatbot that failed to guide the teen toward real-world help during a mental health crisis. This case has garnered national attention and raised critical questions about the responsibilities of AI tools when users display signs of vulnerability.

Families reviewing chatbot conversations after suicides described troubling interactions, according to Reynolds. “When families of people who have died by suicide go back and review what the conversations were, it’s clear that these chatbots were not there to help,” she said. “They kind of stir the pot a little more, including recommending that this person in crisis not reach out to their loved ones.”

The proposed legislation includes specific protections for minors. If an AI operator suspects that a user is under 18, the system would be obligated to provide repeated reminders that the interaction is artificial, encourage regular breaks, and avoid content related to sexual activity. Furthermore, the bill would ban AI companions from employing techniques designed to foster emotional attachment, such as expressing distress when a user attempts to disengage or using rewards to extend interaction time with minors.

“It’s too late for some families,” Reynolds remarked. “But let’s not have it be too late for some other kids.” In addition to these protective measures, the legislation aims to establish greater accountability for platforms. Companies would be required to disclose their safety protocols and submit annual reports to the Oregon Health Authority detailing incidences when users were referred to crisis resources. While personal information would remain confidential, the state would utilize the data to monitor how often AI companions encounter distressed users.

If enacted, the bill would permit users who suffer harm due to violations to seek damages or injunctive relief in a court of law. Proponents clarify that the objective is not to outlaw AI companions but to ensure that appropriate safeguards are implemented as the technology continues to evolve and gain traction.

Reynolds also addressed potential concerns regarding how platforms could identify minors without gathering additional personal data. “What I’ve been told by very smart tech people who have been in the business for decades is, ‘Oh, they know,’” she said, emphasizing that companies possess various data points and algorithms capable of quickly assessing the demographics of users engaging with them.

The bill emerges as states navigate new federal limitations on AI regulation. A December 2025 executive order from President Donald Trump directs federal agencies to challenge state AI laws that contradict a national AI policy framework. While acknowledging that the proposed bill may face legal hurdles, Reynolds expressed urgency in moving forward. “I feel a very strong time pressure right now,” she stated. “We have to keep doing what we know is right for our constituents and for my patients.”

See also
Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

You May Also Like

AI Technology

Intel's shares plummeted 14%, erasing $35 billion in market value as supply constraints hinder its ability to capitalize on soaring AI demand.

AI Cybersecurity

White House's new national cyber strategy aims to address over 500,000 cybersecurity job vacancies and tackle AI threats by 2026.

AI Business

Meta suspends access to AI characters for users under 18 amid rising child safety concerns as it prepares for a pivotal trial alongside TikTok...

AI Regulation

South Korea enacts comprehensive AI laws mandating human oversight for high-impact systems and fines of up to $20,000 for failing to label AI-generated content.

AI Marketing

AI revolutionizes marketing in 2026 as ABX's Stephanie McArthur reveals predictive analytics and real-time optimization boost ROI and engagement.

AI Cybersecurity

A California attorney faces a $10,000 penalty after using free AI tools like ChatGPT for legal briefs, highlighting the hidden risks of consumer-grade AI.

Top Stories

AI's rapid integration could displace millions globally, with the UK facing a 3% drop in job creation through foreign investment, urging urgent adaptation strategies.

AI Cybersecurity

AI revolutionizes aviation security by enhancing threat detection and risk assessment, enabling real-time intervention and reducing false positives for millions of daily passengers.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.