13% of US Youth Turn to AI Tools for Mental Health Support; Clinicians Urged to Assess Use

Patients increasingly rely on AI tools for mental health support, with 13% of US youth having sought such help, prompting calls for clinicians to assess usage regularly.

Patients are increasingly turning to generative artificial intelligence (AI) tools for mental health support, prompting clinicians to routinely ask about this usage during assessments, according to a clinical review published in JAMA Psychiatry. The article, authored by Shaddy K. Saba, PhD, from the New York University Silver School of Social Work, and William B. Weeks, MD, from the New York University School of Global Public Health, synthesizes emerging evidence on how individuals use large language models for mental health assistance.

Recent data cited in the article indicate that over 5 million youth in the US, approximately 13%, have sought mental health guidance from AI tools, with usage peaking at 22% among those aged 18 to 21 years. Furthermore, nearly half of adult patients with mental health conditions reported using these models for support, primarily for anxiety, depression, and personal advice. Reported applications included emotional support, companionship, psychoeducation, and help processing challenging experiences, often between clinical visits or as alternatives to traditional care.

Dr. Saba and Dr. Weeks outlined three significant clinical implications stemming from the use of AI in mental health settings. First, AI interactions may disclose concerns that patients hesitate to discuss with clinicians, including stigmatized thoughts or perceived trivial questions. Second, these tools may influence how patients interpret their own experiences; prior analyses noted that large language models can produce responses that are overly validating, generate misinformation, or provide guidance that fails to align with individual circumstances. Third, a lack of awareness regarding patients’ AI use may hinder clinicians’ ability to address misinformation or incorporate these experiences into patient care.

The authors also highlighted various risks associated with the use of AI tools, including the potential for inaccurate or harmful outputs, inadequate responses to suicidal ideation, and the reinforcement of detrimental behaviors. Concerns relating to bias were raised, particularly regarding patients with serious mental illnesses or those from racial and ethnic minority groups. Privacy issues were also flagged, as information shared in consumer-oriented AI tools lacks the safeguards present in clinical environments.

To mitigate these challenges, the researchers proposed a structured, patient-centered framework: normalizing the use of AI tools, exploring their benefits before addressing concerns, eliciting patient perspectives, providing information with explicit consent, and maintaining an ongoing dialogue about these tools. These strategies build on established clinical communication practices and aim to integrate AI use into regular care rather than treating it as a one-time screening topic.

The authors acknowledged that evidence surrounding this area is still developing, as their findings are grounded in previously published studies rather than new primary data, which may limit the generalizability of outcomes. “Without routine assessment, patients are relating to these tools in ways clinicians cannot observe, developing habits they cannot shape, and potentially encountering harms they cannot prevent,” Dr. Saba and Dr. Weeks stated.

The overall landscape of AI in mental health is evolving, with significant implications for both patients and clinicians. As generative AI continues to penetrate various aspects of healthcare, understanding its role and the potential risks involved will be critical for ensuring patient safety and effective care.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.