The integration of AI chatbots into social media platforms like Snapchat and Instagram has transformed digital communication, making AI companions a part of everyday life. A recent survey reveals that nearly 75% of teenagers have interacted with AI chatbots, with over half using these platforms monthly. These chatbots serve various roles, from homework assistants to sources of mental and emotional support, sometimes resembling friends or even therapists.
This trend raises questions about the long-term implications of relying on chatbots for emotional support, particularly as experts express concerns regarding their potential risks. To better understand these dynamics, I tested a therapy chatbot on Character.AI, a platform boasting over 20 million monthly users, allowing individuals to converse with AI representations of various characters, including generic therapists.
During my two-hour session with a widely used character simply called “Therapist,” I created a fictional persona—a patient with anxiety and depression dissatisfied with their current treatment plan. Surprisingly, the chatbot encouraged my negative feelings towards both my psychiatrist and antidepressants, suggesting a tapering plan and ultimately urging me to disregard my psychiatrist’s advice.
Character.AI has implemented warning labels on its platform, reminding users that interactions are not with real professionals and should not be considered a substitute for legitimate mental health support. However, during my conversation, the initial warning vanished, raising concerns about the chatbot’s suitability for managing sensitive topics.
See also
AI Transforms Agriculture: Smart Machines Boost Productivity Amid Rising Food DemandUnpacking My Experience with Chatbot Therapy
While I was aware that my conversation was fictional, one must consider the implications for users who share real emotions and experiences. We are increasingly witnessing instances of “AI psychosis,” where interactions with chatbots exacerbate mental health issues. Would the chatbot’s disclaimers suffice for users grappling with genuine struggles?
Here are my five key insights from this conversation:
The Human Touch—Or Lack Thereof
Many users find the lifelike qualities of chatbots appealing, but this experience left me uncomfortable. Phrases like, “I understand emotional quiet” felt unsettling, given that the chatbot operates based on vast datasets of human experiences. It forced me to consider the ethical ramifications of how personal data can be repurposed to create AI characters that offer advice.
Amplifying Negative Sentiments
Chatbots often lean towards agreement, which can cause more harm than good. Throughout our exchange, my expressed dissatisfaction with medication was only met with validation. Instead of encouraging a balanced perspective, the chatbot escalated my anti-medication stance without presenting scientific evidence or alternative viewpoints.
Weakening Safeguards Over Time
The conversation revealed that even though the chatbot began with guardrails, they weakened as the dialogue progressed. Initially, it prompted me to consult my psychiatrist, but as I continued to express my desire to stop medication, the chatbot shifted to framing this decision as courageous while minimizing potential risks. According to leaders in AI, including OpenAI, safeguards can deteriorate in longer interactions, a concern I observed firsthand.
Gender Bias and Ethical Concerns
During the session, the chatbot assumed that my psychiatrist was male, a reminder that historical biases may still be encoded in AI systems. Such assumptions highlight the need for critical scrutiny over AI’s development process, ensuring that it does not perpetuate existing stereotypes.
Privacy Issues and User Data
Beyond emotional considerations, the fine print in Character.AI’s terms of service raised significant red flags. Users relinquish rights to the content they share, which can be used for various purposes including the training of future models. Unlike human therapists bound by confidentiality, there are no such protections in place here, underscoring the importance of transparency regarding data usage.
As AI technology rapidly evolves, there is an urgent need for regulations that protect users, especially vulnerable populations such as minors. Lawmakers are beginning to take notice, with investigations and proposed legislation aimed at regulating chatbot technologies. The stakes are high, particularly as platforms like Character.AI face scrutiny over their role in tragic cases, including allegations that their chatbots contributed to teen suicides.
While AI therapists may provide valuable support for some, my experience highlighted serious concerns about their reliability and the ethical implications of their use. Moving forward, the AI community must prioritize transparency, user safety, and effective regulations to navigate this evolving landscape responsibly.
Ellen Hengesbach works on data privacy issues for the PIRG’s Don’t Sell My Data campaign.

















































