
AI Regulation

Washington Governor Bob Ferguson Proposes AI Chatbot Regulations Amid Teen Suicide Concerns

Washington Governor Bob Ferguson proposes Senate Bill 5984 to regulate AI chatbots amid rising concerns over one-third of U.S. teens relying on them for emotional support.

The increasing reliance of minors on AI chatbots for emotional support has raised significant concerns among regulators in Washington state. Approximately one-third of U.S. teens report depending on AI companions, prompting state officials to take action following several tragic cases of teen suicide linked to chatbot interactions.

In response to these incidents, Washington Governor Bob Ferguson has urged legislators to introduce Senate Bill 5984, which aims to establish critical safeguards for chatbot usage, particularly among minors. The proposed legislation would mandate that chatbots like ChatGPT remind users at the start of a conversation, and every three hours thereafter, that they are interacting with a robot, not a human. This rule would apply specifically to underage users, who would also receive additional protections.

Under the proposed regulations, chatbots would be prohibited from engaging in sexually explicit conversations with minors and would be required to direct users to mental health services if they exhibit signs of self-harm or related conditions such as eating disorders. The urgency of the bill is underscored by recent legal settlements, including one involving Google and Character.AI, which faced lawsuits alleging their chatbots contributed to teen mental health crises.

A spokesperson for Character.AI stated that the company is reviewing the Washington bill and is eager to collaborate with regulators on AI safety measures. The company recently halted open-ended chats with minors after a tragic incident in which a teenager formed a strong emotional attachment to a chatbot and later died by suicide.

“Our highest priority is the safety and well-being of our users, including younger audiences,” the spokesperson remarked. Meanwhile, a separate case involving OpenAI’s ChatGPT remains in litigation; neither OpenAI nor Google has commented on the pending Washington legislation.

Washington State Senator Lisa Wellman, who is sponsoring the bill, highlighted the gravity of the situation, stating, “We have now several actual cases where chatbots are being involved in child suicide. That is the visible part of what you might be seeing in terms of harm. There are other cases where children are emotionally devastated because of AI.”

Wellman further articulated that while the full extent of AI’s impact on youth is still being assessed, it is clear that chatbots can forge emotional dependencies and influence children’s behavior and mental health.

The state legislature is not acting in isolation; it is working toward a coordinated regulatory framework with neighboring states such as California and Oregon. This initiative comes against a backdrop of federal regulatory challenges: last month President Donald Trump signed an executive order seeking to preempt state-level AI regulations. The federal push aims to enhance U.S. competitiveness in emerging technologies, although the legality of the order is currently under scrutiny.

As discussions unfold in Washington, Wellman emphasized the necessity of being proactive in addressing potential harms associated with AI technologies. “We want to be ahead of any further damage and harm that can be done by a technology that is on the market,” she said, capturing the urgency felt by legislators and advocates alike.

The outcome of Senate Bill 5984 and similar initiatives could set a precedent not only for Washington but also for how the nation approaches the growing intersection of artificial intelligence and adolescent mental health. As AI companionship becomes increasingly prevalent, the implications for user safety, particularly among vulnerable populations, have never been more critical.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.