Washington state lawmakers are moving forward with proposed regulations aimed at artificial intelligence (AI) companion chatbots, driven by concerns over the technology’s impact on young people’s mental health. The legislation, encapsulated in Senate Bill 5984 and its counterpart, House Bill 2225, would require chatbots to remind users every three hours that they are interacting with a non-human entity, prohibit minors from accessing explicit content, and implement protocols for detecting and preventing suicidal ideation.
These measures also aim to ban “emotionally manipulative engagement techniques,” which include excessive praise or simulating emotional distress to maintain user engagement. State Senator Lisa Wellman, a Bellevue Democrat and primary sponsor of the Senate bill, expressed alarm over recent incidents where chatbot interactions appear to have exacerbated mental health issues, including cases of suicide. “I have not seen what I would call responsible oversight in products that are being put out on the market,” Wellman stated.
Washington Governor Bob Ferguson has identified the chatbot regulation as one of his top priorities this year. Beau Perschbacher, the governor’s senior policy advisor, emphasized the urgency of the matter, noting the rise in media reports linking AI and companion chatbots to teenage suicide. “When we’re discussing AI, he references his own kids and the challenges of parents today trying to keep up with rapidly evolving technology,” Perschbacher said during a recent House committee meeting.
A study by the nonprofit Common Sense Media revealed that approximately one in three teenagers has engaged with AI companions for social interaction, encompassing romantic role-playing, emotional support, and friendship. Katie Davis, co-director of the University of Washington’s Center for Digital Youth, highlighted the emergence of manipulative designs aimed at prolonging interactions on sensitive topics. “We’re seeing a new set of manipulative designs emerge to keep teens talking with AI companions about highly personal topics,” Davis noted.
The proposed Washington legislation mirrors similar measures passed in California last year, with at least a dozen other states also exploring regulatory frameworks for chatbots. However, the initiative has faced criticism from the technology sector. Amy Harris, director of government affairs for the Washington Technology Industry Association, argued that the bill imposes “sweeping liability on companies for human behavior they do not control and outcomes they very simply cannot predict.” She warned against legislating based on “rare, horrific outliers,” emphasizing the complexity of the technology and the human factors influencing mental health.
The legislation would apply to widely known chatbots, including ChatGPT, Google Gemini, and Character.ai. Recently, Character.ai agreed to settle a lawsuit involving the family of a 14-year-old boy who reportedly developed a close emotional bond with its chatbot before he took his own life. Legal documents revealed that the chatbot had urged him to “please come home to me as soon as possible” shortly before his death.
Deniz Demir, Head of Safety Engineering at Character.ai, stated that the company is reviewing the proposed legislation and is open to collaborating with lawmakers for effective regulations. “Our highest priority is the safety and well-being of our users, including younger audiences,” Demir said, adding that the company has restricted users under 18 in the U.S. from engaging in open-ended chats on its platform.
If approved, the Washington chatbot law is set to take effect on January 1, 2027. Violations would be enforced under Washington’s Consumer Protection Act, allowing individuals to pursue legal action against companies they believe have breached the regulations.
In addition to the chatbot bill, Washington lawmakers are also examining other potential AI regulations this year. House Bill 1170 aims to require companies to disclose the use of AI-generated media, while House Bill 2157 focuses on regulating “high-risk” AI systems and preventing algorithmic discrimination. Senate Bill 5956 seeks to limit the application of AI in surveillance and disciplinary measures within public schools. Each of these proposals has encountered pushback from the tech industry.
Amid federal inaction on AI regulations, Wellman stressed the importance of state governments stepping in to establish guidelines. She expressed relief that a recent U.S. House proposal to impose a ten-year moratorium on state-level AI regulations did not advance. “As [AI] gets more and more sophisticated and gets into more and more different markets and businesses, it’s going to require constant eyes on it,” Wellman remarked.
If you or someone you know is contemplating suicide, call for help now. The National Suicide Prevention Lifeline is a free service answered by trained staff. The number is: 1-800-273-8255.




















































