AI Education

1 in 5 Student AI Interactions Flagged for Cheating, Self-Harm, and Bullying, Securly Reports

Securly reports that 1 in 5 student interactions with AI involve cheating, self-harm, or bullying, highlighting urgent safety concerns in education.

Data from Securly, a provider of internet filtering and safety services for schools, shows that roughly one in five student interactions with generative artificial intelligence (AI) was flagged for behaviors such as cheating, self-harm, and bullying. The analysis examined nearly 1.2 million student interactions across more than 1,300 school districts between December 1, 2025, and February 20, 2026, and found that about 2% of all interactions raised red flags for potential violence or cyberbullying. The findings underscore the complexities of integrating AI tools into educational settings.

While the data highlights concerning trends, Tammy Wincup, CEO of Securly, noted that the majority of interactions were appropriate: roughly 80% aligned with district policies on AI use, suggesting that when schools set clear guidelines, students largely follow them. “When a district actually sets some guardrails and policies around their AI usage in schools, 80% of the conversations happening are within the district’s policies,” Wincup said.

The Securly analysis offers an unusual window into student AI use because it draws on observed interactions rather than the self-reported data common in traditional research. “That’s why it’s fascinating,” remarked Jeremy Roschelle, co-executive director of learning science research at Digital Promise. In November, Securly introduced a feature that lets district officials define parameters for AI use, much as they filter specific websites; large language models then redirect queries that fall outside those parameters, helping maintain safety and compliance with district standards.

Notably, 95% of deflected queries came from students attempting to use AI for academic assignments, a trend Wincup described as expected: students will test the boundaries set around new tools. Another 2% of flagged interactions related to gaming, and less than 1% involved sexual content or firearms. In total, inappropriate interactions accounted for more than 24,000 queries, underscoring the importance of vigilant monitoring. Some raised serious safety concerns, including one student asking AI for help composing an email describing suicidal thoughts.

Securly’s findings show a higher rate of potentially unsafe AI interactions (2%) than of unsafe traditional internet searches (0.4%). Wincup suggested the gap may reflect Securly’s long experience identifying dangerous online searches, while its AI-monitoring work is still maturing. Roschelle, for his part, expressed curiosity about the remaining 80% of interactions deemed appropriate and how they affect students’ learning. “What we want to do is make sure [AI] is not just appropriate, but is actually valuable for student learning,” he said.

The analysis also sheds light on which AI tools students use most. ChatGPT was the most common platform, accounting for 42% of interactions, followed by Securly’s own AI Chat at 28% and Google’s Gemini at 21%. Other educational technology tools with embedded AI features, such as MagicSchool and SchoolAI, made up the remaining 9%. While these numbers are not nationally representative, Wincup believes the major AI platforms are likely prevalent across districts.

With the rise of AI in education, technology leaders are finding themselves in a new role. Wincup remarked, “They’re no longer just buying things and setting things up like this.” She emphasized the need for visibility to facilitate informed decisions not only about technology but also about pedagogy and student learning. As educational institutions continue to navigate the implications of AI, striking a balance between fostering innovation and ensuring student safety will be crucial for successful integration.

Written By David Park

At AIPressa, my work focuses on discovering how artificial intelligence is transforming the way we learn and teach. I've covered everything from adaptive learning platforms to the debate over ethical AI use in classrooms and universities. My approach: balancing enthusiasm for educational innovation with legitimate concerns about equity and access. When I'm not writing about EdTech, I'm probably exploring new AI tools for educators or reflecting on how technology can truly democratize knowledge without leaving anyone behind.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.