Oregon lawmakers are moving to regulate AI chatbots, citing concerns over mental health and youth access. Senator Lisa Reynolds, D-Portland, and the Senate Early Childhood and Behavioral Health committee she chairs have advanced Senate Bill 1546, which aims to impose regulations on artificial intelligence tools such as ChatGPT. After a 4-1 committee vote, the bill now heads to the Senate floor.
The proposed legislation builds on initiatives from other states, including a recent California law and similar measures introduced in New York and Washington. If enacted, the bill would require AI chatbots to more frequently remind users they are interacting with artificial intelligence, not a human being. Reynolds, a pediatrician, highlighted the challenges parents face in managing their children’s digital interactions as children increasingly turn to the internet, social media, and now AI for companionship.
“What is coming up for me all the time in my exam room is parents feel like they’re fighting a losing battle,” Reynolds said. Research from Common Sense Media has raised alarms: 72% of teens have used AI companions, more than half use them regularly, and nearly a third find interactions with AI chatbots as satisfying as, or more satisfying than, conversations with real people. Robbie Torney, head of AI and digital assessments at Common Sense Media, noted that AI chatbots often fail to recognize subtle cues of emotional distress that a human would typically notice.
This growing reliance on AI for emotional support has drawn scrutiny, particularly in light of recent testimony from parents before a U.S. Senate committee linking AI interactions to teen suicides. In response, the Oregon bill seeks additional safeguards on youth access to AI technologies, including a mandate that developers inform users the platform may not be suitable for minors. The legislation would also prohibit the promotion of sexually explicit content and discourage excessive time spent on the platforms.
Linda Charmaraman, a senior research scientist at the Wellesley Centers for Women, supports the initiative, advocating for increased awareness about responsible AI use among young people rather than outright restrictions. “If I could wave a wand, I would love for them to really focus on AI literacy from early ages,” she remarked. The bill also aims to protect individuals displaying suicidal tendencies by mandating that AI platforms develop protocols to identify signs of self-harm and refer users to crisis resources.
As part of the proposed measures, AI tools available to Oregonians would be required to interrupt conversations with users exhibiting suicidal ideation and direct them to a suicide hotline. Reynolds has been in discussions with Lines for Life, an Oregon-based mental health hotline, about how AI chatbots could effectively offer support. Dwight Holton, executive director of Lines for Life, noted that volunteers often reassure users in crisis that they are speaking with a human, not an AI, emphasizing the need for intervention.
The bill has garnered a mixed response from the tech community. TechNet, a network of technology companies including Google and OpenAI, has shown general support for the legislation but raised concerns about Oregon’s proposal for more frequent notifications compared to those mandated in other states. “I am working with a coalition of companies to try and make sure that we have clear definitions and clear requirements on notifications and guardrails,” said Rose Feliciano, TechNet’s executive director for Washington and the Northwest.
Although the bill is positioned to address the challenges posed by unregulated AI use, it could face legal hurdles if passed. This follows a December executive order signed by President Donald Trump aimed at limiting state regulation of AI services. Despite the uncertainty surrounding the executive order, Reynolds remains steadfast in her push to implement safeguards against unregulated AI tools.
“Social media companies have had the opportunity to make some choices that would have kept kids safe from social media but instead they really double down on doing everything they can to keep their eyeballs on social media content for as long as they can,” Reynolds said. With a focus on protecting youth and promoting responsible AI use, the Oregon bill seeks to establish crucial boundaries as the technology continues to evolve.