As emerging agentic AI models proliferate, cybersecurity experts are recognizing their potential to rapidly analyze vast amounts of data, positioning these technologies as vital assets in the ongoing battle against cybercrime. However, experts caution that the very attributes that make these systems effective can also be exploited by malicious actors, jeopardizing personal data, economic stability, and national security.
A recent discussion hosted by the Berkman Klein Center for Internet & Society convened cybersecurity experts who unanimously called for urgent regulatory action from business and government leaders to mitigate the risks posed by these advanced technologies before they escalate into more serious threats.
Data from IBM underscores the urgency: a 2026 study found that cyberattacks targeting public-facing software and AI-enabled systems surged 44 percent year over year. Among the notable incidents was a November breach at Anthropic, the company behind the Claude Code assistant, in which attackers used their own AI models to hunt for vulnerabilities in the system's source code, ultimately exposing its internal workings.
“The unfortunate thing is that the bad people only have to win once in some sense, whereas the defenders have to win all the time,” remarked James Mickens, Gordon McKay Professor of Computer Science. He emphasized the troubling dynamics that arise when considering the implications of agentic AI on cybersecurity.
Furthermore, cybercriminals have markedly improved their techniques, especially in phishing attacks, using AI to home in on targets and craft increasingly convincing messages. “A year ago, we still had email messages in our inbox that had misspellings that were not colloquial English, that were easy to identify if you were vigilant. Now, all those signals are gone,” said Robert Knake, a panelist and partner at Paladin Capital, a cyber-venture capital group.
Knake, who previously served as the first deputy national cyber director for strategy and budget at the White House, advocates for stronger federal mandates requiring the private sector to implement more rigorous security measures to protect consumer and national safety. “We’re not at a place where we can say any error in your software that leads to a harm, you need to be responsible for. That will kill off software development,” he explained. “But we could create a safe harbor in which we say, if you’ve done … these basic things, like using the most current and known secure version of an open-source package … you should not be held liable for a bad outcome from your software. If you haven’t done them, you should be.”
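Knake's safe-harbor condition — demonstrating "basic things," such as running the most current known-secure version of each open-source dependency — can be sketched as a simple audit. The allowlist of minimum versions below is entirely hypothetical, for illustration only; a real policy would come from a vulnerability database, not a hard-coded dict.

```python
# Sketch: check installed dependencies against a list of known-secure
# minimum versions, the kind of "basic things" a safe-harbor rule might
# require a firm to demonstrate. The policy below is hypothetical.
from importlib.metadata import version, PackageNotFoundError

# Hypothetical policy: package name -> minimum known-secure version.
KNOWN_SECURE = {
    "requests": (2, 31, 0),
    "urllib3": (2, 0, 7),
}

def parse_version(v: str) -> tuple:
    """Turn a version string like '2.31.0' into (2, 31, 0) for comparison."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def audit(policy: dict) -> list:
    """Return (package, reason) findings that would void the safe harbor."""
    findings = []
    for name, minimum in policy.items():
        try:
            installed = parse_version(version(name))
        except PackageNotFoundError:
            continue  # package not installed, nothing to audit
        if installed < minimum:  # tuple comparison is element-wise
            findings.append((name, f"installed {installed} < required {minimum}"))
    return findings
```

In a compliance setting, an empty findings list would be the evidence a firm presents to claim the safe harbor; any finding would shift liability back onto it.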
However, Mickens noted that establishing such a regulatory framework may be complicated by the evolving nature of cybersecurity threats. He pointed out that for decades, major tech companies like Microsoft and Amazon have implemented internal security measures without formal government mandates. “The big difference with AI is that the threat model changes,” he explained. “Essentially, there’s some human in a chair that’s outside of the data center who’s sending evil commands to the code that’s running in the data center and otherwise trying to trick it into being evil with AI.”
The discussion highlighted the complexity of defining liability for cybersecurity failures, and of specifying the hardware and software measures that would count as compliance. Josephine Wolff, associate dean for research and professor of cybersecurity policy at the Fletcher School at Tufts University, noted the challenge of asking the private sector to proactively identify vulnerabilities across extensive networks. “Documentation and inventories are both really important and really hard,” she said, emphasizing the difficulty of tracking the code running on computers in order to pinpoint vulnerabilities.
While the liability issues surrounding data breaches remain ambiguous, panelists agreed that firms should not retaliate against hackers. The notion of allowing companies to “hack back” raises concerns about escalating conflicts. “The idea that you’re going to bring in the private sector and have that lead to anything but greater chaos seems hopelessly optimistic to me,” Wolff remarked. She also expressed skepticism about large companies like Google or Microsoft executing precise countermeasures against smaller attackers.
Mickens painted a concerning picture of a future where corporations deploy unmanned agentic firewalls capable of initiating offensive actions against perceived threats. “I think that world very quickly degenerates into essentially high-frequency trading, except now in cybersecurity, where you just have a bunch of algorithms going back and forth and reacting to each other in very real time,” he warned, expressing his belief that vigilante-like tactics could lead to more disorder rather than security.
The panelists also speculated on the future of combating AI-enhanced phishing scams, envisioning systems that could reliably verify human identities online. “We have to know with certainty who we’re dealing with, and that it is a real person if they are claiming to be a real person,” Knake stated. Yet, Mickens noted potential hurdles in implementing digital identification systems, particularly concerning privacy and anonymity. “One reason digital IDs have traditionally struggled is that there are many scenarios in which someone wants to be identified as part of their identity, but not the full identity,” he observed.
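Mickens's point — that people often need to prove part of an identity without revealing all of it — can be illustrated with a toy hash-commitment scheme: each attribute is committed to separately with a random salt, so the holder can later open only the attributes a verifier needs. This is a simplified sketch of the selective-disclosure idea, not a real credential system (production designs such as SD-JWT or BBS+ signatures also bind the commitments to an issuer's signature).

```python
# Toy sketch of selective disclosure: commit to identity attributes
# individually so the holder can reveal some and withhold the rest.
# Illustrative only; real systems add issuer signatures and binding.
import hashlib
import os

def commit(attributes: dict) -> tuple:
    """Return (commitments, openings): a public hash per attribute,
    plus the salted values needed to open each one later."""
    commitments, openings = {}, {}
    for key, value in attributes.items():
        salt = os.urandom(16).hex()
        commitments[key] = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
        openings[key] = (salt, value)
    return commitments, openings

def reveal(openings: dict, keys: list) -> dict:
    """Disclose only the requested attributes."""
    return {k: openings[k] for k in keys}

def verify(commitments: dict, disclosed: dict) -> bool:
    """Check each disclosed (salt, value) pair against its commitment."""
    return all(
        hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == commitments[k]
        for k, (salt, value) in disclosed.items()
    )
```

With this shape, a holder could prove an "over 18" attribute to one site while withholding name and address, assuming the verifier trusts whoever published the commitments — exactly the "part of their identity, but not the full identity" scenario Mickens described.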
As the capabilities of AI evolve, both tech companies and government agencies face an ever-shifting landscape filled with both challenges and opportunities. “The ability to have agentic AI essentially sitting over your shoulder, on your phone, on your computer, looking at everything you’re doing and saying this certainly looks like it’s a kill chain for a fraudulent scheme, is there,” Knake concluded. “We can do this. We just need to find the right market players who will make that investment and build that technology.”