Zero Trust Faces Challenge as AI-Driven Attacks Cut Response Time to 11 Minutes

AI-driven attacks have slashed response times to 11 minutes, prompting cybersecurity leaders to emphasize the urgent need for robust AI defenses and human oversight.

Cybersecurity experts within the federal government have long emphasized the importance of trust in formulating effective security policies for agency systems and data. However, the landscape is rapidly evolving as cybercriminals and state-sponsored hackers increasingly leverage artificial intelligence to execute cyberattacks with heightened speed and efficiency. As a result, both governments and businesses are feeling the pressure to implement AI-powered cybersecurity defenses, alongside security architectures that empower AI agents to make key security decisions.

Jennifer Franks, Director of the Center for Enhanced Cybersecurity at the Government Accountability Office, stated that federal agencies are currently navigating the complexities of this dual approach. “We’re having to consider a two-in-one approach,” Franks said during her remarks at the Elastic Public Sector Summit, hosted by FedScoop. “It’s not something that we have to consider as a tool that’s nice to have; it’s a needed necessity right now in an environment to really look at the best practices for anticipating the adversaries that could target your environment.”

The Zero Trust security model, which has its roots in older principles like “least privilege access,” posits that defenders should treat every asset on their network as potentially compromised. This necessitates continuous verification of identity, access, and authorization to safeguard against hackers, data breaches, and insider threats. Yet, threat researchers report that malicious actors are utilizing AI-driven automation to enhance the speed of their attacks, complicating defensive responses and decision-making for human operators.

At the same summit, Mike Nichols, general manager for security solutions at Elastic, pointed out that AI tools have drastically reduced the time required to execute an attack and infiltrate an organization’s network to approximately 11 minutes. Other statistics from the past year indicate a significant decrease in the costs associated with developing custom malware—between 80% and 90%—and a 42% increase in the exploitation of zero-day vulnerabilities before they are publicly disclosed.

Nichols asserted that cybersecurity defenders must adopt AI technologies to match the rapid pace of cyberattacks, stating, “If you’re not using it, you are going to be compromised… that is a guarantee at this point.” Nonetheless, he cautioned against taking claims from “disingenuous vendors” at face value, emphasizing that no technology or process currently exists that can provide truly autonomous cybersecurity operations. Human oversight is crucial in managing critical decisions made by AI agents.

“The bottom line is these things are executing your existing processes and adding some reasoning to it,” Nichols remarked. “So… you have to have a well-oiled process and documented process.”

Cybersecurity expert Chase Cunningham, known as “Dr. Zero Trust” for his advocacy of the principles, has asserted that AI agents can coexist within a Zero Trust architecture, provided they are treated as any other non-human identity in an enterprise system.

Cunningham noted that practices such as network microsegmentation, stringent account controls, and continuous logging align well with Zero Trust principles and can mitigate the potential risks posed by AI agents. “It is just another entity on the network that needs to be explicitly known, verified, constrained, monitored, and governed,” he explained. “If you do not know what model it is, what data it can access, what systems it can call, what actions it can take, and under what conditions it can do those things, then you have introduced ambiguity into the environment. And ambiguity is exactly what Zero Trust is supposed to remove.”
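Cunningham's criteria — explicitly knowing what model an agent is, what data it can access, and what actions it can take — can be illustrated with a minimal sketch. The names here (`AgentPolicy`, `triage-agent-v1`, the scope and action labels) are hypothetical, not drawn from any vendor's product; the point is only that every request from an agent identity is checked against a declared policy and logged, with anything undeclared denied by default:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Explicit, deny-by-default policy for one AI-agent identity."""
    model: str                  # the only model this identity may run as
    data_scopes: frozenset      # datasets the agent may read
    actions: frozenset          # actions the agent may take
    audit_log: list = field(default_factory=list)

    def authorize(self, model: str, scope: str, action: str) -> bool:
        allowed = (
            model == self.model
            and scope in self.data_scopes
            and action in self.actions
        )
        # Continuous logging: record every decision, permitted or denied.
        self.audit_log.append((model, scope, action, allowed))
        return allowed

# Hypothetical agent that may read alerts and asset data, and may open
# tickets or quarantine hosts -- nothing else.
policy = AgentPolicy(
    model="triage-agent-v1",
    data_scopes=frozenset({"alerts", "asset-inventory"}),
    actions=frozenset({"open_ticket", "quarantine_host"}),
)

assert policy.authorize("triage-agent-v1", "alerts", "open_ticket")
# An undeclared action is denied rather than silently permitted.
assert not policy.authorize("triage-agent-v1", "alerts", "delete_logs")
```

This is the "ambiguity removal" Cunningham describes: an action absent from the declared set fails the check instead of succeeding by default, and the audit trail captures denials as well as grants.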

Despite the potential benefits of AI in cybersecurity, Nichols reinforced the necessity of human involvement whenever AI agents make decisions on an organization’s behalf. He called on AI vendors to provide greater transparency about their products, stating, “You can’t have a black box anymore. You can’t have an AI that says, ‘hey, we fixed it, I’m not going to explain why that’s the case.’ By design, you need to find a vendor that’s open API [and who can provide] explainability—the work that has to be there.”

As cyber threats continue to evolve, the integration of AI into cybersecurity practices may become not only advantageous but essential, requiring a balance of innovative technology and human oversight to safeguard critical systems and data.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved.