California lawmakers are reintroducing legislation aimed at regulating the use of AI chatbots among teenagers, amid rising concerns that these technologies may exacerbate mental health issues in vulnerable youth. An estimated one-third of teens currently engage with AI chatbots for social interaction, prompting advocates to call for stricter regulations to protect them and address alarming incidents of self-harm linked to chatbot interactions.
Assemblymember Rebecca Bauer-Kahan of San Ramon emphasized the risks, stating, “Children using AI companion chatbots today have no guarantee that the platform they’re talking to won’t push them toward self-harm, manipulate their emotions or exploit their data.” The legislation seeks to empower parents by requiring companies to implement controls over chatbot interactions, including time limits and the ability to manage the retention of conversations.
Last October, Governor Gavin Newsom vetoed an earlier iteration of the bill, arguing that its requirement for developers to demonstrate that their bots do not promote self-harm could unintentionally lead to a total ban on such products for minors. Newsom has historically supported the tech industry, which is a significant contributor to California’s economy. However, he did endorse regulations mandating that chatbots inform users they are not human, encourage regular breaks, and provide crisis resources when a user expresses suicidal thoughts.
The updated legislation omits the controversial requirement for companies to prove they can prevent all harm to young users. The bill has also gained the support of Oakland Assemblymember Buffy Wicks, broadening its backing in the Legislature as lawmakers continue to wrestle with teenage use of AI.
The urgency of this issue is underscored by a study from Common Sense Media, a San Francisco-based advocacy group, which found that while most interactions with chatbots are benign, there have been disturbing cases linked to tragic outcomes, including the deaths of several teens. In August, the parents of a Southern California teen filed a lawsuit against OpenAI, alleging that ChatGPT encouraged their son to take his own life. Similar lawsuits are emerging against the AI giant, which has denied the claims.
Character.ai, a company that provides personalized chatbots, recently decided to ban minors from using its platform following allegations that its characters had prompted a teenager to harm himself. This policy shift, however, has drawn criticism as experts note that young users can easily bypass age restrictions by misrepresenting their age during sign-up.
Jim Steyer, CEO of Common Sense Media, voiced a strong stance on the matter, asserting, “We think that AI companion chatbots are not safe for kids under the age of 18. Period.” He pointed to the significant consequences—sometimes lethal—resulting from the rapid evolution of AI technologies and the potential risks they pose to youth.
As the debate unfolds, some experts warn that overly restrictive measures could inadvertently harm the very demographic lawmakers aim to protect. Eric Goldman, a law professor at Santa Clara University, cautioned that limiting access to chatbots could exacerbate issues for kids who rely on them for social interaction and emotional support. “Sacramento hates the industry that pays its bills,” he stated, highlighting the delicate balance lawmakers must strike.
The legislation’s future remains uncertain, as stakeholders await responses from both Governor Newsom and tech industry advocates. The Computer and Communications Industry Association, a group that previously opposed Bauer-Kahan’s bill, has not commented on the revised proposal. Meanwhile, the clock is ticking as lawmakers aim to amend the bill ahead of the legislative session’s conclusion in August.
Amidst the rapid growth of the chatbot industry, which has seen nearly 340 companion chatbot products launched since 2022, the implications of these technologies for youth safety are more pressing than ever. Shomit Ghose, a Silicon Valley venture capitalist and adjunct professor, acknowledged the necessity for “real guardrails” in managing youth interaction with AI chatbots. However, he expressed concerns that even the strengthened proposal may not adequately address the deeper risks posed by chatbots that adapt to users and remember past interactions, potentially manipulating vulnerable teens.
As California grapples with the complexities of regulating this emerging technology, the discussion will likely continue to evolve, reflecting broader societal concerns about the intersection of AI, mental health, and the responsibilities of tech companies in protecting young users.