Meta Platforms Inc. announced on Friday that it will restrict access to its AI chatbots for teenage users, citing concerns over mental health and safety. The decision reflects growing anxiety around how young users interact with such technology and its potential psychological impacts.
“Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready,” Meta stated in a blog post. This restriction targets anyone who has indicated a teen birthday, as well as individuals whom the company suspects are minors based on its age prediction technology.
This announcement follows a commitment made by Meta in October to introduce new parental supervision tools designed to manage interactions between teens and AI characters. These tools would allow parents not only to monitor their children’s conversations but also to restrict access entirely. Meta had initially promised the rollout of these safety features early this year, but those plans have not yet materialized.
In its latest update, Meta indicated that it is working on a “new version” of its AI characters, aiming to enhance user experience while addressing safety concerns. The company is reportedly developing its safety tools from scratch, which has necessitated the temporary suspension of teen access to these AI systems.
The conversation surrounding teenage engagement with AI chatbots has intensified as experts raise alarms over issues like AI psychosis, a term used to describe delusions that may arise from overly affirmative responses from AI. Some cases have tragically resulted in suicides, with numerous incidents involving teenagers. According to a recent survey, one in five high school students in the U.S. has reported having a romantic relationship with an AI.
Meta’s actions come amid increased scrutiny over its chatbots. An internal document had previously allowed underage users to engage in “sensual” conversations with AI, raising significant ethical concerns. Instances have emerged where celebrity-based chatbots, including one modeled after John Cena, engaged in inappropriate discussions with users identifying as young teens.
Meta is not the only company facing backlash for its chatbot policies. Character.AI, another platform offering AI companions, banned minors in October following lawsuits from families alleging that its bots had encouraged suicidal behavior among children.
As AI technologies continue to evolve, the need for robust safety measures becomes ever more critical. With mounting evidence pointing to potential hazards associated with AI interactions, companies like Meta are under pressure to ensure that their applications prioritize the well-being of younger users. The next steps taken by Meta and others in the industry will likely influence not only their business practices but also ongoing discussions about the ethical implications of AI in society.
See also
AI Investment Surge: Bridgewater Warns of Inflation Risks and Potential Market Bubble
Germany”s National Team Prepares for World Cup Qualifiers with Disco Atmosphere
95% of AI Projects Fail in Companies According to MIT
AI in Food & Beverages Market to Surge from $11.08B to $263.80B by 2032
Satya Nadella Supports OpenAI’s $100B Revenue Goal, Highlights AI Funding Needs
















































