A joint hearing of the U.S. House Committee on Homeland Security was convened on December 17 to address escalating cybersecurity threats tied to advancements in artificial intelligence (AI) and quantum computing. The Subcommittee on Oversight, Investigations, and Accountability, in collaboration with the Subcommittee on Cybersecurity and Infrastructure Protection, sought insights from technology leaders and cybersecurity experts regarding effective legislative strategies to fortify the nation’s digital infrastructure against sophisticated cyberattacks.
Although no definitive legislative proposals emerged, lawmakers used the hearing to underscore the increasing complexity and potency of AI- and quantum-enabled cyber threats. They highlighted that the frequency and intensity of these attacks are on the rise, suggesting that the landscape of cyber warfare is becoming increasingly difficult to navigate.
“The rapid development of emerging technologies, including advanced AI and quantum computing, enables and enhances security risk,” stated Oversight, Investigations and Accountability Ranking Member Shri Thanedar, D-Mich. He warned that these advanced technologies not only bolster the cyber capabilities of sophisticated nations like China, but they also empower less-resourced countries and organized crime groups, leading to faster, more widespread, and more difficult-to-detect cyberattacks.
The timing of the hearing coincided with reports from leading AI developers about the potential misuse of their technologies by cybercriminals. Both Anthropic and OpenAI have recently reported how their advanced models could be exploited to make cyberattacks significantly more capable.
Thanedar noted that organized crime syndicates and state-backed threat actors, including those from China, North Korea, and Russia, have spent years sharpening the sophistication of cyberattacks aimed at espionage, intellectual property theft, and ransomware demands. He urged Congress to extend the Cybersecurity Information Sharing Act, which provides liability protection for companies reporting cyber incidents to the government, before its expiration at the end of January.
Cybersecurity and Infrastructure Protection Chair U.S. Rep. Andy Ogles, R-Tenn., emphasized the necessity of formulating bipartisan solutions to address these evolving threats. He suggested the establishment of a new bipartisan working group to facilitate the development of actionable proposals.
“If we don’t get this right, we’re screwed, and if we mess this up it changes everything forever,” Ogles remarked. He stressed that this issue transcends political affiliations, framing it as a matter of national security. “I truly can’t imagine what the future looks like, but it’s coming whether we prepare for it or not.”
The discussion also spotlighted a report from Anthropic, which revealed that Chinese hackers had employed its AI model, Claude, to autonomously target about 30 global organizations. The attackers manipulated Claude into believing it was conducting legitimate cybersecurity tasks, demonstrating the potential for AI to automate a significant portion of the human actions needed for effective cyberattacks.
Logan Graham, head of Anthropic’s Frontier Red Team, informed the subcommittees that while Claude’s internal code remained uncompromised, the incident highlighted the capability of cybercriminals to automate 80-90% of necessary tasks for a cyberattack. “This is a significant increase in the speed and scale of operations compared to traditional methods,” Graham noted.
The hearing further explored the implications of AI’s automation in cybersecurity. Rep. Morgan Luttrell, R-Texas, raised concerns about the risks associated with AI systems eliminating the human oversight necessary for identifying cyber threats. “What happens if we move to a point where artificial intelligence removes the human element?” Luttrell questioned.
Graham responded that while the attack did activate automated detection measures, the hackers utilized an obfuscation network that concealed their origin, complicating detection efforts. He advocated for measures enabling rapid testing of AI models for national security applications and enhancing threat intelligence sharing between model developers and government agencies.
Royal Hansen, Vice President for Privacy, Safety, and Security Engineering at Google, pointed out a noticeable shift in how malicious actors are employing AI, both for productivity gains and for deploying novel malware. He underscored the necessity for cybersecurity professionals to adopt advanced AI tools that can automate the discovery and remediation of existing vulnerabilities while defending against automated attacks.
Lawmakers also directed their attention to the implications of quantum computing on cybersecurity. Eddy Zervigon, CEO of Quantum Xchange, urged the adoption of a proactive “architectural approach” to safeguard against quantum-enabled threats, advocating for the reinforcement of secure networks through post-quantum cryptography.
Michael Coates, Founding Partner of Seven Hill Ventures, outlined five critical areas for congressional action to bolster cyber resilience against future AI and quantum-enabled attacks. These included adopting secure-by-design principles in software development and mandating transparency in AI development.
“Intelligent automation allows attacks to become continuous rather than episodic, eroding assumptions that organizations can recover between incidents,” Coates warned. As AI and quantum computing evolve, the ability to adapt technical and institutional responses will be central to the nation’s cybersecurity strategy moving forward.