AI Education

1 in 5 Student AI Interactions Flagged for Cheating, Self-Harm, and Bullying, Securly Reports

Securly reports that 1 in 5 student interactions with AI involve cheating, self-harm, or bullying, highlighting urgent safety concerns in education.

Data from Securly, a provider of internet filtering and safety services, reveals that approximately one in five interactions between students and generative artificial intelligence (AI) involved behaviors like cheating, self-harm, and bullying. The analysis, which examined nearly 1.2 million student interactions across more than 1,300 school districts from December 1, 2025, to February 20, 2026, found that about 2% of these interactions raised red flags for potential violence or cyberbullying. The findings underscore the complexities of integrating AI tools into educational settings.

While the data highlights concerning trends, Tammy Wincup, CEO of Securly, noted that the majority of interactions were appropriate, with roughly 80% aligning with district policies on AI usage. “When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said. The pattern suggests that when schools establish clear guidelines, students largely adhere to them, and that AI can enhance learning when used responsibly.

The Securly analysis provides a unique perspective on student interactions with AI, diverging from traditional research methods that often rely on self-reported data. As Jeremy Roschelle, co-executive director of learning science research for Digital Promise, remarked, “That’s why it’s fascinating.” In November, Securly introduced a feature that allows district officials to define parameters for AI use, similar to how they filter specific websites. This feature enables large language models to redirect inappropriate queries, helping to maintain safety and compliance with educational standards.

Notably, 95% of deflected queries stemmed from students attempting to use AI for academic assignments, a trend Wincup described as expected. She anticipates that students will experiment with the boundaries set around AI tools. A smaller percentage of flagged interactions—2%—related to gaming, with less than 1% addressing sexual content or firearms. In total, these inappropriate interactions accounted for over 24,000 queries, underscoring the importance of vigilance in monitoring AI usage. Some queries raised significant safety concerns, including one student seeking assistance from AI to compose an email detailing suicidal thoughts.

Securly’s findings indicate a higher rate of potentially unsafe AI interactions—2%—compared to 0.4% for traditional internet searches. Wincup suggested that this discrepancy may stem from Securly’s extensive experience in identifying dangerous online searches, while its work with AI is still developing. Roschelle expressed curiosity about the nature of the remaining 80% of interactions deemed appropriate and their impact on students’ learning. “What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” he said.

The analysis also sheds light on the preferences of students regarding AI tools. Securly found that ChatGPT was the most commonly used platform, accounting for 42% of interactions, followed by Securly’s own AI Chat at 28%, and Google’s Gemini at 21%. Other educational technology tools with embedded AI features, such as MagicSchool and SchoolAI, made up the remaining 9%. While these numbers are not nationally representative, Wincup believes that major AI platforms are likely prevalent in various districts.

With the rise of AI in education, technology leaders are finding themselves in a new role. Wincup remarked, “They’re no longer just buying things and setting things up like this.” She emphasized the need for visibility to facilitate informed decisions not only about technology but also about pedagogy and student learning. As educational institutions continue to navigate the implications of AI, striking a balance between fostering innovation and ensuring student safety will be crucial for successful integration.

Written by David Park



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.