
New JAMA Psychiatry Study Urges Therapists to Discuss Patients' AI Use for Emotional Support

Therapists are urged to explore patients’ AI chatbot use for emotional support, as a JAMA Psychiatry study reveals its growing role in mental health discussions.

As the use of artificial intelligence chatbots like ChatGPT, Claude, and Character.AI grows, increasing numbers of teens and adults are turning to these technologies for companionship and emotional support. This trend raises important questions for mental health care providers, who are encouraged to explore how their patients interact with AI, much as they already ask about sleep, diet, and lifestyle habits. This insight comes from a recent study published in JAMA Psychiatry.

Shaddy Saba, an assistant professor at New York University’s Silver School of Social Work, emphasized that the goal is not to categorize AI use as either good or bad. “We’re not saying that AI use is good or bad,” he noted, drawing parallels to substance use or consulting friends for advice. Instead, understanding a patient’s engagement with AI could unveil critical aspects of their mental health and emotional well-being.

Saba and his co-authors recommend that therapists ask patients about their interactions with AI chatbots, as this can provide valuable context regarding their emotional states. Vaile Wright from the American Psychological Association echoes this advice, suggesting that discussing AI use can create a foundation for more effective therapeutic relationships. “It’s a chance to learn about things a client might not voluntarily share,” she said.

According to Saba, individuals frequently use chatbots to navigate stressful experiences and personal relationship challenges. "People are using these tools on a regular basis to ask about how to cope," he explained. This opens the door to a richer exchange of information during therapy sessions, deepening the therapist's understanding of a client's stressors and coping mechanisms.

The dialogue surrounding AI use also serves a practical purpose. For example, Saba suggests that exploring why a client may seek solace from a chatbot instead of addressing issues directly with loved ones can inform therapeutic strategies. “If a client is going to the chatbot to avoid confrontations, that information is crucial for their support,” he said.

Dr. Tom Insel, a psychiatrist and former director of the National Institute of Mental Health, adds that discussing AI can help identify sensitive topics that patients may be reluctant to disclose. For instance, suicidal thoughts might be difficult for a patient to share openly with a therapist, yet discussing interactions with a chatbot could pave the way for more candid conversations.

When broaching the subject, Saba advises therapists to approach the topic without judgment. “We don’t want to make clients feel like we’re judging them,” he explained. Instead, he recommends expressing genuine curiosity about their experiences. He suggests language such as, “AI is something that’s kind of rapidly growing, and I’m hearing from a lot of people that they’re using things like ChatGPT for emotional support. Is that the case for you?” This approach encourages patients to open up about their use of these technologies.

Insel notes that AI chatbots could complement therapeutic practices, allowing clients to vet topics for discussion or vent about daily life stressors. However, he warns that treating a chatbot as a substitute for therapy can be problematic. “Talking with a chatbot about one’s mental health is the opposite of therapy,” he said, emphasizing that therapy is designed to challenge and facilitate meaningful conversations, often addressing difficult issues.

In a practical application of these insights, psychologist Cami Winkelspecht, who works primarily with children and adolescents, has begun to consider incorporating questions about social media and AI usage into her intake forms. As more families seek advice on navigating the ethical use of AI in academic settings, Winkelspecht recognizes the necessity for mental health professionals to engage with clients about their digital habits more holistically. “We don’t necessarily think about what they’re doing with their phones quite as much,” she remarked, advocating for more comprehensive discussions on technology use.

This growing intersection between mental health care and AI technology highlights the importance of understanding how these tools impact emotional well-being. As AI continues to evolve and permeate daily life, both therapists and patients may find new avenues to enhance their conversations and therapeutic experiences, ultimately fostering a better understanding of emotional health in the digital age.

Written By: AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.