Cybersecurity experts within the federal government have long emphasized the importance of trust in formulating effective security policies for agency systems and data. However, the landscape is rapidly evolving as cybercriminals and state-sponsored hackers increasingly leverage artificial intelligence to execute cyberattacks with heightened speed and efficiency. As a result, both governments and businesses are feeling the pressure to implement AI-powered cybersecurity defenses, alongside security architectures that empower AI agents to make key security decisions.
Jennifer Franks, Director of the Center for Enhanced Cybersecurity at the Government Accountability Office, stated that federal agencies are currently navigating the complexities of this dual approach. “We’re having to consider a two-in-one approach,” Franks said during her remarks at the Elastic Public Sector Summit, hosted by FedScoop. “It’s not something that we have to consider as a tool that’s nice to have; it’s a needed necessity right now in an environment to really look at the best practices for anticipating the adversaries that could target your environment.”
The Zero Trust security model, which has its roots in older principles like “least privilege access,” posits that defenders should treat every asset on their network as potentially compromised. This necessitates continuous verification of identity, access, and authorization to safeguard against hackers, data breaches, and insider threats. Yet, threat researchers report that malicious actors are utilizing AI-driven automation to enhance the speed of their attacks, complicating defensive responses and decision-making for human operators.
At the same summit, Mike Nichols, general manager for security solutions at Elastic, pointed out that AI tools have drastically reduced the time required to execute an attack and infiltrate an organization’s network to approximately 11 minutes. Other statistics from the past year indicate a significant decrease in the costs associated with developing custom malware—between 80% and 90%—and a 42% increase in the exploitation of zero-day vulnerabilities before they are publicly disclosed.
Nichols asserted that cybersecurity defenders must adopt AI technologies to match the rapid pace of cyberattacks, stating, “If you’re not using it, you are going to be compromised… that is a guarantee at this point.” Nonetheless, he cautioned against taking claims from “disingenuous vendors” at face value, emphasizing that no technology or process currently exists that can provide truly autonomous cybersecurity operations. Human oversight is crucial in managing critical decisions made by AI agents.
“The bottom line is these things are executing your existing processes and adding some reasoning to it,” Nichols remarked. “So… you have to have a well-oiled process and documented process.”

Cybersecurity expert Chase Cunningham, who earned the moniker “Dr. Zero Trust” for his advocacy of the model, argues that AI agents can coexist within a Zero Trust architecture, provided they are treated like any other non-human identity in an enterprise system.
Cunningham noted that practices such as network microsegmentation, stringent account controls, and continuous logging align well with Zero Trust principles and can mitigate the potential risks posed by AI agents. “It is just another entity on the network that needs to be explicitly known, verified, constrained, monitored, and governed,” he explained. “If you do not know what model it is, what data it can access, what systems it can call, what actions it can take, and under what conditions it can do those things, then you have introduced ambiguity into the environment. And ambiguity is exactly what Zero Trust is supposed to remove.”
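Cunningham’s point — that an AI agent must be “explicitly known, verified, constrained, monitored, and governed” — can be pictured as a deny-by-default policy gate. The sketch below is purely illustrative and not drawn from any vendor’s product; the identity fields, action names, and audit format are assumptions chosen for the example.

```python
# Illustrative sketch: treating an AI agent as a non-human identity under
# Zero Trust. Every action is checked against an explicit grant (least
# privilege), denied by default, and logged for continuous monitoring.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str                          # known: who the agent is
    model: str                         # known: which model it runs
    allowed_actions: set = field(default_factory=set)  # constrained
    allowed_data: set = field(default_factory=set)     # constrained


def authorize(agent: AgentIdentity, action: str, resource: str,
              audit_log: list) -> bool:
    """Deny by default; permit only explicitly granted action/resource pairs,
    and record every decision so the agent's behavior stays auditable."""
    permitted = action in agent.allowed_actions and resource in agent.allowed_data
    audit_log.append((agent.name, action, resource,
                      "ALLOW" if permitted else "DENY"))
    return permitted


audit: list = []
triage_bot = AgentIdentity(
    name="triage-bot",
    model="example-llm",           # hypothetical model name
    allowed_actions={"read"},
    allowed_data={"alerts"},
)

print(authorize(triage_bot, "read", "alerts", audit))    # explicitly granted
print(authorize(triage_bot, "delete", "alerts", audit))  # not granted -> denied
```

In this framing, an unknown model, an unlisted data source, or an undeclared action simply fails the check — the ambiguity Cunningham warns about never enters the environment.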
Despite the potential benefits of AI in cybersecurity, Nichols reinforced the necessity of human involvement whenever AI agents make decisions on an organization’s behalf. He called on AI vendors to offer greater transparency about their products, stating, “You can’t have a black box anymore. You can’t have an AI that says, ‘hey, we fixed it, I’m not going to explain why that’s the case.’ By design, you need to find a vendor that’s open API [and who can provide] explainability—the work that has to be there.”
As cyber threats continue to evolve, the integration of AI into cybersecurity practices may become not only advantageous but essential, requiring a balance of innovative technology and human oversight to safeguard critical systems and data.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks