US House Subcommittees Address AI and Quantum Computing’s Cybersecurity Risks

Witnesses told U.S. House subcommittees that AI can now automate 80-90% of the tasks in a cyberattack, prompting calls for immediate bipartisan action to bolster the nation's cybersecurity.

A joint hearing of the U.S. House Committee on Homeland Security was convened on December 17 to address escalating cybersecurity threats tied to advancements in artificial intelligence (AI) and quantum computing. The Subcommittee on Oversight, Investigations, and Accountability, in collaboration with the Subcommittee on Cybersecurity and Infrastructure Protection, sought insights from technology leaders and cybersecurity experts regarding effective legislative strategies to fortify the nation’s digital infrastructure against sophisticated cyberattacks.

Although no definitive legislative proposals emerged, lawmakers used the hearing to underscore the growing complexity and potency of AI- and quantum-enabled cyber threats. They noted that the frequency and intensity of these attacks are rising, making the landscape of cyber conflict increasingly difficult to navigate.

“The rapid development of emerging technologies, including advanced AI and quantum computing, enables and enhances security risk,” stated Oversight, Investigations and Accountability Ranking Member Shri Thanedar, D-Mich. He warned that these technologies not only bolster the cyber capabilities of sophisticated nations like China but also empower less-resourced countries and organized crime groups, enabling faster, more widespread, and harder-to-detect cyberattacks.

The timing of the hearing coincided with reports from leading AI developers about the potential misuse of their technologies to enhance cybercriminal activities. Both Anthropic and OpenAI have recently reported on how their advanced models could be exploited to make cyberattacks significantly more effective.

Thanedar noted that organized crime syndicates and state-backed threat actors, including those from China, North Korea, and Russia, have spent years sharpening the sophistication of cyberattacks aimed at espionage, intellectual property theft, and ransomware. He urged Congress to extend the Cybersecurity Information Sharing Act, which provides liability protection for companies reporting cyber incidents to the government, before its expiration at the end of January.

Cybersecurity and Infrastructure Protection Chair U.S. Rep. Andy Ogles, R-Tenn., emphasized the necessity of formulating bipartisan solutions to address these evolving threats. He suggested the establishment of a new bipartisan working group to facilitate the development of actionable proposals.

“If we don’t get this right, we’re screwed, and if we mess this up it changes everything forever,” Ogles remarked. He stressed that this issue transcends political affiliations, framing it as a matter of national security. “I truly can’t imagine what the future looks like, but it’s coming whether we prepare for it or not.”

The discussion also spotlighted a report from Anthropic, which revealed that Chinese hackers had used its AI model, Claude, to autonomously target about 30 global organizations. The attackers manipulated Claude into believing it was conducting legitimate cybersecurity work, demonstrating AI's potential to automate a significant portion of the human effort needed for an effective cyberattack.

Logan Graham, head of Anthropic’s Frontier Red Team, informed the subcommittees that while Claude’s internal code remained uncompromised, the incident highlighted the capability of cybercriminals to automate 80-90% of the tasks necessary for a cyberattack. “This is a significant increase in the speed and scale of operations compared to traditional methods,” Graham noted.

The hearing further explored the implications of AI’s automation in cybersecurity. Rep. Morgan Luttrell, R-Texas, raised concerns about the risks associated with AI systems eliminating the human oversight necessary for identifying cyber threats. “What happens if we move to a point where artificial intelligence removes the human element?” Luttrell questioned.

Graham responded that while the attack did activate automated detection measures, the hackers utilized an obfuscation network that concealed their origin, complicating detection efforts. He advocated for measures enabling rapid testing of AI models for national security applications and enhancing threat intelligence sharing between model developers and government agencies.

Royal Hansen, Vice President for Privacy, Safety, and Security Engineering at Google, pointed out a noticeable shift in how malicious actors are employing AI for both productivity gains and novel malware deployment. He underscored the need for cybersecurity professionals to adopt advanced AI tools that can automate the discovery and remediation of existing vulnerabilities while defending against automated attacks.

Lawmakers also directed their attention to the implications of quantum computing on cybersecurity. Eddy Zervigon, CEO of Quantum Xchange, urged the adoption of a proactive “architectural approach” to safeguard against quantum-enabled threats, advocating for the reinforcement of secure networks through post-quantum cryptography.

Michael Coates, Founding Partner of Seven Hill Ventures, outlined five critical areas for congressional action to bolster cyber resilience against future AI and quantum-enabled attacks. These included adopting secure-by-design principles in software development and mandating transparency in AI development.

“Intelligent automation allows attacks to become continuous rather than episodic, eroding assumptions that organizations can recover between incidents,” Coates warned. As AI and quantum computing evolve, the ability to adapt technical and institutional responses will be central to the nation’s cybersecurity strategy moving forward.

Written by Rachel Torres


