Patients are increasingly turning to generative artificial intelligence (AI) tools for mental health support, and clinicians should routinely ask about this use during assessments, according to a clinical review published in JAMA Psychiatry. The article, authored by Shaddy K. Saba, PhD, of the New York University Silver School of Social Work, and William B. Weeks, MD, of the New York University School of Global Public Health, synthesizes emerging evidence on how individuals use large language models for mental health assistance.
Recent data cited in the article indicate that over 5 million youth in the US, approximately 13%, have sought mental health guidance from AI tools, with usage peaking at 22% among those aged 18 to 21 years. In addition, nearly half of adult patients with mental health conditions reported using these models for support, most often seeking help with anxiety, depression, and personal advice. Reported applications included emotional support, companionship, psychoeducation, and assistance in processing challenging experiences, often between clinical visits or as alternatives to traditional care.
Dr. Saba and Dr. Weeks outlined three significant clinical implications stemming from the use of AI in mental health settings. First, patients may disclose concerns to AI tools that they hesitate to raise with clinicians, including stigmatized thoughts or questions they perceive as trivial. Second, these tools may influence how patients interpret their own experiences; prior analyses noted that large language models can produce responses that are overly validating, generate misinformation, or provide guidance that fails to align with individual circumstances. Third, a lack of awareness of patients’ AI use may hinder clinicians’ ability to address misinformation or incorporate these experiences into patient care.
The authors also highlighted various risks associated with the use of AI tools, including the potential for inaccurate or harmful outputs, inadequate responses to suicidal ideation, and the reinforcement of detrimental behaviors. Concerns relating to bias were raised, particularly regarding patients with serious mental illnesses or those from racial and ethnic minority groups. Privacy issues were also flagged, as information shared in consumer-oriented AI tools lacks the safeguards present in clinical environments.
To mitigate these challenges, the researchers proposed a structured, patient-centered framework. This includes normalizing the use of AI tools, exploring their benefits before addressing concerns, eliciting patient perspectives, providing information with explicit consent, and maintaining an ongoing dialogue about these tools. These strategies are based on established clinical communication practices and aim to integrate AI use into regular care rather than treating it as a one-time screening question.
The authors acknowledged that evidence surrounding this area is still developing, as their findings are grounded in previously published studies rather than new primary data, which may limit the generalizability of outcomes. “Without routine assessment, patients are relating to these tools in ways clinicians cannot observe, developing habits they cannot shape, and potentially encountering harms they cannot prevent,” Dr. Saba and Dr. Weeks stated.
The overall landscape of AI in mental health is evolving, with significant implications for both patients and clinicians. As generative AI continues to penetrate various aspects of healthcare, understanding its role and the potential risks involved will be critical for ensuring patient safety and effective care.