Concerns about the deployment of artificial intelligence (AI) continue to grow, particularly regarding AI companions: chatbots designed to simulate friendship, emotional support, and even romantic relationships. This trend has reached students through general-purpose consumer technology, as highlighted in a recent report by the global nonprofit EDSAFE AI Alliance. The report, titled S.A.F.E. By Design: Policy, Research, and Practice Recommendations for AI Companions in Education, warns that these tools have created a “shadow” environment in which young people struggle to differentiate between generative AI as a pedagogical tool and generative AI as a social entity.
The report states that students increasingly use AI chatbots on school-issued devices for personal emotional support rather than academic purposes. “Together we grappled with a rapidly eroding boundary between general-purpose technology and specialized purpose-built EdTech,” the report noted. Ji Soo Song, director of projects and initiatives at the State Educational Technology Directors Association (SETDA), emphasized that the urgency stems from how quickly these unprecedented tools are being deployed into the education technology market.
“This is such uncharted water … and therefore an incentive for the [ed-tech] market to innovate there,” Song said. He cautioned that, if not managed carefully, the unintended consequences of these tools could inflict serious harm on students, particularly those from underinvested communities. For school district administrators, the challenge in selecting ed-tech tools has typically centered on procurement, efficacy, and privacy compliance, but AI companions introduce a new question: Are these tools addictive or manipulative?
According to the S.A.F.E. report, many administrators may be overlooking the “anthropomorphic features” of new AI tools: design choices, such as first-person pronouns and emotional validation, that make AI seem human. While these features enhance user engagement, they can also encourage parasocial relationships that bypass critical thinking. “Children and teens are using AI companions without the social-emotional and critical-thinking skills needed to distinguish between artificial and genuine human interactions,” the report stated. It emphasized that the adolescent brain’s reasoning center continues to develop well into adulthood, leaving adolescents particularly vulnerable to unhealthy engagement with AI companions.
Song stressed the need for districts to focus on how new tools improve student learning and well-being, rather than merely on engagement metrics. “Uptake isn’t as important in education as … student growth, right?” he pointed out. The report recommends that districts apply “five pillars of ed-tech quality” designed to ensure tools are safe, evidence-based, inclusive, usable, and interoperable, while also scrutinizing whether new tools are designed to challenge students’ thinking or merely to gratify them.
Another pressing issue highlighted in the report is the vast policy gap surrounding AI companions. While many states have established broad frameworks for AI, specific policies to mitigate the unique risks associated with AI companions are lacking. The report advocates for AI vendors to support schools in mandated reporting, particularly when students express thoughts of self-harm or violence to chatbot companions. “We’re not just saying, ‘Hey, educators have all of the responsibility,’” Song commented. “There is a responsibility also on the vendor side to make sure that you’re developing features that can detect those things and report them to necessary authorities.”
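The report stops short of prescribing how vendor-side detection should work. Purely as a hypothetical illustration, the sketch below shows one minimal shape such a safety hook could take: scanning a student message for self-harm or violence signals and escalating any flags to school staff. The function names, pattern lists, and escalation step are assumptions made for illustration only, not drawn from the report or any actual product; a real system would pair a trained classifier with human review rather than rely on keywords.

```python
# Hypothetical sketch of a vendor-side safety hook of the kind the report
# describes. Everything here is illustrative, not from the report or any
# real chatbot product.
import re
from dataclasses import dataclass

# Illustrative signal phrases only; a production system would use a trained
# classifier plus human review, not a keyword list.
SELF_HARM_PATTERNS = [
    r"\bhurt(ing)? myself\b",
    r"\bkill(ing)? myself\b",
    r"\bend it all\b",
]
VIOLENCE_PATTERNS = [
    r"\bhurt (him|her|them|someone)\b",
    r"\bbring a weapon\b",
]

@dataclass
class SafetyFlag:
    category: str  # "self_harm" or "violence"
    matched: str   # the phrase that triggered the flag

def scan_message(text: str) -> list[SafetyFlag]:
    """Return any safety flags found in a student message."""
    flags = []
    lowered = text.lower()
    for pattern in SELF_HARM_PATTERNS:
        match = re.search(pattern, lowered)
        if match:
            flags.append(SafetyFlag("self_harm", match.group(0)))
    for pattern in VIOLENCE_PATTERNS:
        match = re.search(pattern, lowered)
        if match:
            flags.append(SafetyFlag("violence", match.group(0)))
    return flags

def handle_flags(flags: list[SafetyFlag]) -> None:
    # In a real deployment, this would notify the district's designated
    # mandated reporter rather than print to a console.
    for flag in flags:
        print(f"ALERT ({flag.category}): escalate to school staff, matched '{flag.matched}'")

if __name__ == "__main__":
    handle_flags(scan_message("Sometimes I think about hurting myself."))
```

Even in this toy form, the design point the report makes is visible: detection and escalation live on the vendor side, so the burden of monitoring raw chat logs does not fall entirely on educators.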
Song echoed the report’s call for policymakers to create dedicated AI offices within state education agencies to provide technical assistance to districts lacking resources for auditing complex AI algorithms. “State education agencies really need at least a point person, if not an entire office of ed tech, dedicated to provide technical assistance to districts on topics like this,” he said.
For developers creating the next generation of classroom tools, the message is clear: eliminate features that encourage constant engagement, often borrowed from social media. The EDSAFE AI Alliance has urged the development of tools that promote digital wellness, recommending the removal of flirty or affectionate language and excessive praise that mimics human relationships. Song raised concerns about the pace of technological development outstripping safety research, stating, “It certainly feels like an environment where we’re having to sort of craft a plane as it flies.”
Ultimately, the report emphasizes that the goal is not to block AI use by students but to ensure that technology supports human thinking rather than replacing it. As students increasingly interact with these digital companions, the call for clear guardrails has never been more urgent.