(TNS) — Many school districts are significantly underprepared to combat the escalating threat of AI-powered cyber attacks, a situation exacerbated by federal budget cuts to cybersecurity programs, experts warn. As educators increasingly use generative AI tools for tasks like drafting emails and conducting research, cyber criminals are exploiting the same technologies to enhance their attacks, according to Don Ringelestein, executive director of technology for Yorkville 115, a school district near Chicago.
“AI is billed as something that’s going to save us time. It’s going to be an assistant for us,” he stated. “Well, that same thing applies to hackers. It’s going to make their jobs easier.”
Even before the surge in AI capabilities, schools were frequent targets of cyber attacks. They hold extensive sensitive data on children that can fetch a high price on the dark web, manage considerable sums of money and personal data, including employees' Social Security numbers, and often lack the resources for robust protection. When breached, schools face intense public pressure to comply with cybercriminals' demands, making them prime targets for data theft and ransomware attacks.
The introduction of AI complicates an already challenging scenario for school district technology leaders and their staff as they grapple with how to defend against cyber criminals using AI to streamline their attacks.
AI’s Role in Amplifying Cyber Attacks
Cyber criminals employ generative AI tools in various ways, notably by crafting more sophisticated phishing emails. Randy Rose, vice president of security operations and intelligence for the nonprofit Center for Internet Security, noted that spelling and grammatical errors, once telltale signs of fraudulent emails, are no longer reliable indicators of a hacker's handiwork.
“The one thing that those [large language models] are really good at is predicting the next word in a sentence,” Rose explained. “If you’re saying, create an email that says this in American English, it’ll create a really convincing one.”
AI chatbots can also emulate a specific individual's writing style, enabling cyber criminals to deceive victims into downloading malware or divulging sensitive information. AI can likewise replicate a person's voice or image: a phone call that appears to come from a district superintendent directing an urgent payment may in fact be a deepfake aimed at misrouting funds, as described by Pete Just, AI project director for the Consortium for School Networking.
“I always joke that ‘deepfakes’ used to take a team of CIA agents with $1.5 million worth of equipment five months to create the deepfake,” he said. “But today, you can have high school students in five minutes on their phones, do a deepfake. That rolls over into cybersecurity.”
The capabilities of AI allow cyber criminals to gather vast amounts of information quickly, identifying targets and vulnerabilities. Unlike those of private businesses, school district budgets and staff email addresses are often publicly accessible, making districts all the more attractive to cyber criminals.
Experts indicate that as AI technology continues to advance, so will the complexity of cyber attacks against educational institutions. Michael Klein, senior director for preparedness and response at the Institute for Security and Technology, highlighted that the evolution of AI has lowered the barrier for engaging in sophisticated cyber crime.
“Now, it’s like, type in a couple of lines and then it goes off onto the Internet and does the things for you,” Klein noted. “You have potentially one individual who is not very skilled being able to do what would have taken an entire ransomware gang to effectively do before.”
The trend of targeting schools is unlikely to abate, with experts asserting that schools remain relatively easy prey compared to hospitals and banks, primarily due to limited resources. Most cybersecurity services cater to businesses that can afford to pay more for protection.
Furthermore, personal data belonging to children can fetch a higher price on the dark web because children typically have clean credit histories, which makes schools' troves of student records an especially valuable target.
The rise in AI-driven attacks coincides with federal budget cuts that have targeted school cybersecurity programs. Just expressed concern over diminished resources like the Multi-State Information Sharing and Analysis Center (MS-ISAC), a critical cybersecurity resource hub that provided complimentary support to schools.
Under the Trump administration, funding for the MS-ISAC was cut, and the cooperative agreement was officially terminated in October. Although the Center for Internet Security continues to operate MS-ISAC, schools must now pay a membership fee to access its services.
As the federal government shifts its cybersecurity focus, experts stress the necessity for continued support of school cybersecurity initiatives. Klein emphasized the critical role the federal government plays in understanding and mitigating cyber threats.
“The federal government has visibility into threats that state and local governments, and certainly K-12 institutions, don’t have,” he remarked. “The U.S. intelligence community is doing great research to understand what adversaries are trying to do and what mechanisms will be most helpful to stop those threats.”
To bolster defenses against AI-enhanced attacks, experts suggest several strategies for schools. Ringelestein noted the importance of collaboration among local districts for sharing best practices. Additionally, tabletop exercises can help district leaders simulate responses to cyber threats, building preparedness at low cost.
Investing in cybersecurity training for employees is crucial. Just highlighted the need for educational programs, particularly regarding phishing attacks, to enhance staff awareness and vigilance. Schools should implement procedures for verifying identities during phone or video calls to prevent exploitation.
Experts recommend employing software that sends simulated phishing emails to staff, providing an opportunity for training on recognizing such threats. Overall, fortifying cybersecurity remains achievable, even as AI technologies evolve, by adhering to basic security measures like multi-factor authentication and regular software updates.
“When everything’s falling apart, you go back to the basics: blocking and tackling, so to speak,” Just concluded. “This is the blocking and tackling of cybersecurity.”