As generative artificial intelligence becomes increasingly integrated into the lives of young people, new research indicates that its impact on mental health may depend more on the context of its use than on how frequently it is used. A report by Surgo Health, in partnership with Young Futures and the Jed Foundation, reveals that approximately 12 percent of youth experiencing mental health difficulties have turned to generative AI to express their feelings.
While many reported short-term emotional relief, this did not consistently translate into long-term positive outcomes. The study found that when AI was incorporated into a broader support network, the benefits were enhanced; conversely, when it served as a replacement for human interaction, negative or neutral experiences became more prevalent. Katie Hurley, vice president of community initiatives at the Jed Foundation, emphasized the need for greater “algorithmic transparency” from AI platforms, highlighting a growing disconnect between young users and the systems that govern their digital interactions.
“It’s worth noting that these systems are designed to be validating and comforting,” said Hurley. “So when someone lacks parental support, peer support or any caring adult in their life, and they turn to one of these AI chatbots and feel understood, it’s natural they’ll keep going back for more.” The research utilized national survey data from over 1,300 youth aged 13 to 24, examining how their engagement with AI is shaped by varying levels of social support, stress, and access to care.
Hannah Kemp, chief solutions officer at Surgo Health, noted that young people’s interactions with AI often reflect their offline circumstances. “Whatever is happening in people’s offline lives, their AI patterns often mirror that,” Kemp explained. “When folks face barriers to accessing care, AI can become a substitute. And when they carry a lot of anxiety about the future, they’re more likely to become emotionally entangled with it.”
The report classifies youth into six segments based on how they use AI and how that use fits into their lives. Approximately 10 percent are labeled as “Low-Use Anxious Skeptics,” who are distressed and wary of AI. Another 32 percent fall into the “Thriving Light-Touch Pragmatists” category, maintaining a healthy, balanced relationship with AI. Roughly 7 percent are categorized as “Worried Strivers,” who view AI as destabilizing, while 9 percent are “Emotionally Entangled Superusers,” relying on AI for connection when offline support is lacking.
Another 10 percent belong to the “High-Hope, High-Use Skill-Builders” group, using AI as a tool for learning and creativity. The final segment, “Curious Low-Concern Learners,” also accounts for 10 percent and consists of socially grounded youth who use AI for exploration and problem-solving. “Our segmentation shows there are very different patterns,” Kemp stated. “If we really want to understand how young people are using and engaging with AI, we have to move beyond screen time and look at the different segments of youth and what support they actually need.”
About 44 percent of older youth, aged 18 to 24, reported feeling they have little control over how AI affects them, compared to 29 percent of younger youth aged 13 to 17. The study also found that only 26 percent of older youth identify as “Thriving Light-Touch Pragmatists,” in contrast to 41 percent of younger youth, indicating that older youth may engage with AI in a less balanced manner. Furthermore, about 25 percent of older youth never use AI, compared to 14 percent of their younger counterparts, suggesting deeper skepticism within this older group.
Hurley pointed to the COVID-19 pandemic as a potential factor in older youth’s less favorable relationship with AI, noting that many were in crucial stages of social development during lockdowns. “They were taught to go online to get help… and for better or for worse, that became a go-to strategy,” she said. This reliance on online resources has implications for how young people pose questions and seek information in the AI space.
The report also highlights significant demographic differences in AI engagement. Black youth and those from families receiving government assistance are more likely to be “High-Hope, High-Use Skill-Builders,” while Hispanic youth tend to be “Emotionally Entangled Superusers.” LGBTQ+ youth more frequently align with the “Low-Use Anxious Skeptics” segment. Adele Wang, associate director of research and data science at Surgo Health, noted that these trends indicate a critical opportunity to provide support for those lacking traditional care services.
The findings underscore the need for public health leaders, educators, and policymakers to adopt targeted strategies, moving beyond one-size-fits-all approaches to youth and AI. Kemp highlighted the potential of generative AI to act as a bridge to offline care rather than a standalone solution. “Could it help someone quickly find a therapist who takes their insurance or a community clinic that offers free counseling?” she asked. “That might help smooth some of that friction.”
Ultimately, Hurley argued that limiting AI is not the answer; rather, understanding the underlying needs driving young people to these platforms is essential. “It’s really important to elevate youth voices and to listen and learn from them,” she concluded. “They’re using these things in ways that make sense to them, so it’s important to meet them where they are and hear what they’re trying to get out of it.”