
Schools Face Rising Threat of AI-Powered Cyber Attacks Amid Federal Funding Cuts

Schools face a rising threat of AI-driven cyber attacks as federal funding cuts hinder cybersecurity measures, leaving sensitive student data vulnerable to exploitation.

(TNS) — Many school districts are significantly underprepared to combat the escalating threat of AI-powered cyber attacks, a situation exacerbated by federal budget cuts to cybersecurity programs, experts warn. As educators increasingly leverage generative AI tools for tasks like crafting emails and conducting research, cyber criminals are also exploiting these technologies to enhance their attacks, according to Don Ringelestein, executive director of technology for Yorkville 115, a school district near Chicago.

“AI is billed as something that’s going to save us time. It’s going to be an assistant for us,” he stated. “Well, that same thing applies to hackers. It’s going to make their jobs easier.”

Even prior to the surge in AI capabilities, schools were frequent targets of cyber attacks. They hold extensive sensitive data on children that can fetch a high price on the dark web and often lack the resources for robust protection. Additionally, schools manage considerable sums of money and personal data, including employees’ Social Security numbers. When breached, schools face intense public pressure to comply with cybercriminals’ demands, making them prime targets for data theft and ransomware attacks.

The introduction of AI complicates an already challenging scenario for school district technology leaders and their staff as they grapple with how to defend against cyber criminals using AI to streamline their attacks.

AI’s Role in Amplifying Cyber Attacks

Cyber criminals employ generative AI tools in various ways, notably by crafting more sophisticated phishing emails. Randy Rose, vice president of security, operations, and intelligence for the nonprofit Center for Internet Security, noted that spelling and grammatical errors, once indicators of fraudulent emails, are no longer reliable signs of a hacker’s handiwork.

“The one thing that those [large language models] are really good at is predicting the next word in a sentence,” Rose explained. “If you’re saying, create an email that says this in American English, it’ll create a really convincing one.”

AI chatbots can also emulate a specific individual’s writing style, enabling cyber criminals to deceive victims into downloading malware or divulging sensitive information. Furthermore, AI can replicate a person’s voice or image, leading to scenarios where a phone call appears to be from a district superintendent directing an urgent payment, only to be a deepfake aimed at misrouting funds, as described by Pete Just, AI project director for the Consortium for School Networking.

“I always joke that ‘deepfakes’ used to take a team of CIA agents with $1.5 million worth of equipment five months to create the deepfake,” he said. “But today, you can have high school students in five minutes on their phones, do a deepfake. That rolls over into cybersecurity.”

The capabilities of AI allow cyber criminals to gather vast amounts of information quickly, identifying targets and vulnerabilities. Unlike those of private businesses, school districts' budgets and staff email addresses are often publicly accessible, making districts easier for cyber criminals to research and target.

Experts indicate that as AI technology continues to advance, so will the complexity of cyber attacks against educational institutions. Michael Klein, senior director for preparedness and response at the Institute for Security and Technology, highlighted that the evolution of AI has lowered the barrier for engaging in sophisticated cyber crime.

“Now, it’s like, type in a couple of lines and then it goes off onto the Internet and does the things for you,” Klein noted. “You have potentially one individual who is not very skilled being able to do what would have taken an entire ransomware gang to effectively do before.”

The trend of targeting schools is unlikely to abate, with experts asserting that schools remain relatively easy prey compared to hospitals and banks, primarily due to limited resources. Most cybersecurity services cater to businesses that can afford to pay more for protection.

Furthermore, personal data belonging to children can fetch a higher price on the dark web because children typically have clean credit histories, making student records an especially valuable target and underscoring a dangerous gap in school cybersecurity.

The rise in AI-driven attacks coincides with federal budget cuts that have targeted school cybersecurity programs. Just expressed concern over diminished resources like the Multi-State Information Sharing and Analysis Center (MS-ISAC), a critical cybersecurity resource hub that provided complimentary support to schools.

Under the Trump administration, funding for the MS-ISAC was cut, and the cooperative agreement was officially terminated in October. Although the Center for Internet Security continues to operate MS-ISAC, schools must now pay a membership fee to access its services.

As the federal government shifts its cybersecurity focus, experts stress the necessity for continued support of school cybersecurity initiatives. Klein emphasized the critical role the federal government plays in understanding and mitigating cyber threats.

“The federal government has visibility into threats that state and local governments, and certainly K-12 institutions, don’t have,” he remarked. “The U.S. intelligence community is doing great research to understand what adversaries are trying to do and what mechanisms will be most helpful to stop those threats.”

To bolster defenses against AI-enhanced attacks, experts suggest several strategies for schools. Ringelestein noted the importance of collaboration among local districts for sharing best practices. Additionally, tabletop exercises can help district leaders simulate responses to cyber threats, building preparedness at low cost.

Investing in cybersecurity training for employees is crucial. Just highlighted the need for educational programs, particularly regarding phishing attacks, to enhance staff awareness and vigilance. Schools should implement procedures for verifying identities during phone or video calls to prevent exploitation.

Experts recommend employing software that sends fake phishing emails to staff, providing an opportunity for training on recognizing such threats. Overall, fortifying cybersecurity remains achievable, even as AI technologies evolve, by adhering to basic security measures like multi-factor authentication and regular software updates.
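To make one of those basics concrete: the one-time codes generated by most multi-factor authentication apps follow the public TOTP standard (RFC 6238), which derives a short numeric code from a shared secret and the current 30-second time window. The sketch below is a minimal, stdlib-only Python illustration of that mechanism, not a hardened implementation or anything the districts quoted here are described as using:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation."""
    counter = for_time // step                      # which 30-second window we're in
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at t=59 seconds
# yields the 8-digit code "94287082" with SHA-1.
print(totp(b"12345678901234567890", 59, digits=8))
```

Because both the server and the user's phone compute the same code from the shared secret and the clock, a stolen password alone is no longer enough to log in, which is why MFA blunts many of the phishing attacks described above.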

“When everything’s falling apart, you go back to the basics: blocking and tackling, so to speak,” Just concluded. “This is the blocking and tackling of cybersecurity.”

Written By Rachel Torres


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.