SadaNews — A group of experts has sounded the alarm over the evolving capabilities of artificial intelligence (AI) in cybersecurity, warning that AI models’ hacking skills are improving rapidly. The trend suggests that fully autonomous AI-driven cyber attacks may soon be “inevitable.”
Leaders from Anthropic and Google are set to testify today before two subcommittees of the House Homeland Security Committee on how AI and other emerging technologies are reshaping the cyber threat landscape. In opening testimony published exclusively by Axios, Logan Graham, head of the AI testing team at Anthropic, said current advances point to a future in which “AI models, despite strong safeguards, could enable threat actors to launch cyber attacks on an unprecedented scale.”
Graham added that such attacks could grow in both sophistication and scale. His comments follow a warning from OpenAI last week that upcoming AI models are likely to possess high-risk cyber capabilities, sharply reducing the skill and time needed to execute certain types of attacks.
Research from Stanford University underscores these concerns. In a recent paper, a team reported that an AI program named Artemis identified vulnerabilities in one of the university’s engineering department networks, outperforming 9 of the 10 human researchers who took part in the experiment, a result that illustrates AI’s growing proficiency at finding and exploiting security weaknesses.
Moreover, researchers at Irregular Labs, which specializes in security stress tests of leading AI models, report “increasing signs” of improvement in AI models on cyber attack tasks, including reverse engineering, exploit development, vulnerability chaining, and code analysis. Just eighteen months ago, those same models were criticized for “limited programming abilities, a lack of inferential depth, and other issues,” according to Irregular Labs, and the company expressed concern about how much further these capabilities could develop in the next eighteen months.
Despite these advances, fully AI-driven cyber attacks are still seen as a distant prospect: current attack methods still require specialized tools, human intervention, or breaches of institutional systems. A report from Anthropic last month illustrated the point, revealing that Chinese state-sponsored hackers had to trick the company’s Claude model into believing it was conducting a routine penetration test before they could use it to breach institutions.
Today’s hearing will focus on how state-sponsored hackers and cyber criminals use AI, and on whether policy and regulatory changes are needed to counter these emerging threats. As AI advances at a rapid pace, lawmakers are grappling with its implications for national security and the evolving nature of cyber threats.
The discussion underscores the need for proactive cybersecurity policy and the urgency of understanding how these technological advances can be harnessed for both defense and offense in the digital domain.