Data from Securly, a provider of internet filtering and safety services for schools, shows that roughly one in five interactions between students and generative artificial intelligence (AI) involved concerning behavior such as attempted cheating, self-harm, or bullying. The analysis examined nearly 1.2 million student interactions across more than 1,300 school districts between December 1, 2025, and February 20, 2026, and found that about 2% of interactions raised red flags for potential violence or cyberbullying. The findings underscore the complexity of integrating AI tools into educational settings.
Despite these concerning trends, Tammy Wincup, CEO of Securly, noted that the large majority of interactions were appropriate, with roughly 80% aligning with district policies on AI usage. That suggests that when educational institutions establish clear guidelines, students largely adhere to them. “When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said.
The Securly analysis offers a rare direct look at how students use AI, in contrast with research that relies on self-reported data. “That’s why it’s fascinating,” said Jeremy Roschelle, co-executive director of learning science research for Digital Promise. In November, Securly introduced a feature that lets district officials define parameters for AI use, much as they filter specific websites; large language models can then redirect inappropriate queries, keeping usage within district standards.
Notably, 95% of deflected queries came from students trying to use AI on academic assignments, a trend Wincup called expected: she anticipates that students will test the boundaries set around AI tools. A smaller share of flagged interactions, 2%, related to gaming, and less than 1% involved sexual content or firearms. In total, flagged interactions accounted for more than 24,000 queries, underscoring the need for vigilance in monitoring AI usage. Some raised serious safety concerns, including one student asking AI for help composing an email describing suicidal thoughts.
Securly’s findings show a higher rate of potentially unsafe AI interactions, 2%, compared with 0.4% for traditional internet searches. Wincup suggested the gap may reflect Securly’s long experience identifying dangerous online searches, while its work with AI is still maturing. Roschelle said he is curious about the nature of the remaining 80% of interactions deemed appropriate and their effect on learning. “What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” he said.
The analysis also sheds light on the preferences of students regarding AI tools. Securly found that ChatGPT was the most commonly used platform, accounting for 42% of interactions, followed by Securly’s own AI Chat at 28%, and Google’s Gemini at 21%. Other educational technology tools with embedded AI features, such as MagicSchool and SchoolAI, made up the remaining 9%. While these numbers are not nationally representative, Wincup believes that major AI platforms are likely prevalent in various districts.
With the rise of AI in education, district technology leaders are taking on a new role. “They’re no longer just buying things and setting things up like this,” Wincup remarked, emphasizing the need for visibility to inform decisions not only about technology but also about pedagogy and student learning. As schools continue to navigate AI’s implications, balancing innovation with student safety will be crucial to successful integration.
See also
Andrew Ng Advocates for Coding Skills Amid AI Evolution in Tech
AI’s Growing Influence in Higher Education: Balancing Innovation and Critical Thinking
AI in English Language Education: 6 Principles for Ethical Use and Human-Centered Solutions
Ghana’s Ministry of Education Launches AI Curriculum, Training 68,000 Teachers by 2025
57% of Special Educators Use AI for IEPs, Raising Legal and Ethical Concerns