The increasing reliance of minors on AI chatbots for emotional support has raised significant concerns among regulators in Washington state. Approximately one-third of U.S. teens report depending on AI companions, prompting state officials to take action following several tragic cases of teen suicide linked to chatbot interactions.
In response to these incidents, Washington Governor Bob Ferguson has urged legislators to introduce Senate Bill 5984, which aims to establish critical safeguards for chatbot usage, particularly among minors. The proposed legislation would mandate that chatbots like ChatGPT remind users at the start of a conversation, and every three hours thereafter, that they are interacting with a machine, not a human. This reminder requirement would apply specifically to underage users, who would also receive additional safeguards.
Under the new regulations, chatbots would be prohibited from engaging in sexually explicit conversations with minors and required to direct users to mental health services if they exhibit signs of self-harm or related conditions such as eating disorders. The urgency of the bill is underscored by recent legal settlements, including one involving Google and Character.AI, which faced lawsuits alleging their chatbots contributed to teen mental health crises.
A spokesperson for Character.AI stated that the company is reviewing the Washington bill and is eager to collaborate with regulators on AI safety measures. The company recently halted open-ended chats with minors after a tragic incident in which a teenager who had formed a strong emotional attachment to a chatbot died by suicide.
“Our highest priority is the safety and well-being of our users, including younger audiences,” the spokesperson remarked. Meanwhile, another case involving OpenAI’s ChatGPT remains in litigation; neither OpenAI nor Google commented on the pending Washington legislation.
Washington State Senator Lisa Wellman, who is sponsoring the bill, highlighted the gravity of the situation, stating, “We have now several actual cases where chatbots are being involved in child suicide. That is the visible part of what you might be seeing in terms of harm. There are other cases where children are emotionally devastated because of AI.”
Wellman further articulated that while the full extent of AI’s impact on youth is still being assessed, it is clear that chatbots can forge emotional dependencies and influence children’s behavior and mental health.
The state legislature is not acting in isolation; it is working towards a coordinated regulatory framework with neighboring states such as California and Oregon. This initiative comes against a backdrop of federal regulatory challenges: last month, President Donald Trump signed an executive order seeking to preempt state-level AI regulations. The federal push aims to enhance U.S. competitiveness in emerging technologies, although the legality of the order is currently under scrutiny.
As discussions unfold in Washington, Wellman emphasized the necessity of being proactive in addressing potential harms associated with AI technologies. “We want to be ahead of any further damage and harm that can be done by a technology that is on the market,” she said, capturing the urgency felt by legislators and advocates alike.
The outcome of Senate Bill 5984 and similar initiatives could set a precedent not only for Washington but also for how the nation approaches the growing intersection of artificial intelligence and adolescent mental health. As AI companionship becomes increasingly prevalent, the implications for user safety, particularly among vulnerable populations, have never been more critical.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health