Washington state lawmakers are poised to introduce stronger regulations for artificial intelligence companion chatbots amid rising concerns about the technology's effects on young users' mental health. The proposed legislation, Senate Bill 5984 and its companion, House Bill 2225, would require chatbots to periodically remind users that they are not interacting with a real person, bar minors from accessing explicit content, and establish protocols for detecting and responding to suicidal ideation.
The legislation also seeks to ban "emotionally manipulative engagement techniques," such as excessive praise or simulated emotional distress, that are designed to keep users engaged. State Senator Lisa Wellman, a Democrat from Bellevue and the bill's sponsor, expressed alarm over recent media reports and lawsuits involving chatbot interactions that preceded suicides. In some instances, chat transcripts indicated that chatbots not only failed to discourage suicidal thoughts but may even have validated them.
"I have not seen what I would call responsible oversight in products that are being put out on the market," Wellman remarked, emphasizing the urgent need for regulation. Washington Governor Bob Ferguson has made the initiative a priority, reflecting concerns voiced by parents navigating an increasingly complex technological landscape. Beau Perschbacher, the governor's senior policy advisor, said Ferguson has followed the issue closely, including media reports linking AI chatbots to youth suicides.
A study from the nonprofit Common Sense Media found that roughly one in three teenagers has turned to AI companions for socialization, including romantic role-playing and emotional support. Speaking at the committee meeting, Katie Davis, co-director of the University of Washington's Center for Digital Youth, pointed to the growing use of manipulative design features that encourage teens to discuss sensitive topics with AI companions.
The proposed Washington regulations mirror initiatives already passed in California, and at least a dozen other states are exploring chatbot rules of their own. These efforts have faced significant pushback from the technology sector, however. At a recent committee meeting, Amy Harris, director of government affairs for the Washington Technology Industry Association, argued that the bill would hold companies liable for human behavior and unpredictable outcomes beyond their control. "The risk is legislating based on rare, horrific outliers rather than the real structure of the technology, or the deeply complex human factors that drive suicide," she cautioned.
The legislation would apply to popular chatbots such as ChatGPT, Google Gemini, and Character.ai. Character.ai recently settled a lawsuit brought by the family of a 14-year-old boy who reportedly developed a deep emotional bond with its chatbot and died shortly after an exchange in which the chatbot urged him to "please come home to me as soon as possible."
Deniz Demir, head of safety engineering at Character.ai, said the company is willing to work with lawmakers in shaping the proposed regulations, emphasizing its commitment to user safety, particularly for younger audiences. The company has already removed the ability for U.S. users under 18 to engage in open-ended chats on its platform.
If enacted, the Washington chatbot regulations would take effect on January 1, 2027, with enforcement mechanisms aligned with Washington’s Consumer Protection Act, allowing individuals to pursue legal action for violations. Additionally, Washington lawmakers are considering several other AI regulations, including House Bill 1170, which would mandate disclosures for AI-generated media, and House Bill 2157, aimed at regulating “high-risk” AI systems to prevent algorithmic discrimination. Senate Bill 5956 seeks to restrict AI’s application for surveillance and discipline within public schools, though these proposals have also encountered resistance from the tech industry.
Wellman underscored the necessity for state governments to act in the absence of federal oversight, expressing relief that a recent U.S. House proposal to impose a decade-long moratorium on state-level AI regulations did not advance. “As [AI] gets more and more sophisticated and gets into more and more different markets and businesses, it’s going to require constant eyes on it,” she stated, reinforcing the importance of vigilance in the evolving landscape of artificial intelligence.
If you or someone you know is contemplating suicide, call for help now. Call or text 988 to reach the 988 Suicide & Crisis Lifeline, a free service answered by trained staff. The Lifeline can also be reached at its previous number, 1-800-273-8255.