Connect with us

Hi, what are you looking for?

AI Cybersecurity

50% of Security Leaders Unprepared for Escalating AI-Powered Cyber Threats, EY Reports

EY reveals 50% of security leaders feel unprepared for AI-driven cyber threats, with 85% citing inadequate funding to combat these escalating risks.

AI-driven cyberattacks are outpacing corporate defenses, with a striking 50% of security leaders admitting they are unprepared for this evolving threat. A survey conducted by EY, which included over 500 senior cybersecurity officials, highlights a significant disconnect: while 96% recognize AI-enabled attacks as a serious risk, only 46% express strong confidence in their organization’s ability to withstand such threats. The survey reveals that many security teams remain in pilot mode, even as attackers increasingly employ AI technologies on a large scale.

Budget constraints and governance challenges are notable pressure points for organizations. According to EY, 85% of cybersecurity leaders believe their current funding is inadequate to address the risks associated with the AI era. Despite this, a staggering 97% of respondents assert that a structured framework for secure AI utilization is crucial for realizing return on investment (ROI); however, only 20% report having such a framework fully implemented. There is a shift in investment trends, with the percentage of organizations allocating at least a quarter of their security budget to AI-native solutions projected to rise dramatically from 9% to 48% over the next two years.

The slow readiness in enterprise AI security can be attributed to various factors. While organizations are eager to harness AI’s speed and scalability, many initiatives falter in execution. An analysis from MIT found that 95% of enterprise AI projects fail to yield significant ROI, indicating that pilot efforts do not always lead to successful implementations. Additionally, a global survey revealed that although 87% of business leaders anticipate AI will transform their operations, only 29% believe their teams possess the necessary training to adapt.

Security leaders are also grappling with architectural debt as many security operations centers (SOCs) were not designed to manage critical interactions with AI models. This limitation affects their ability to investigate issues related to prompt injection, data poisoning, and model misuse. Without defined ownership and measurable outcomes, AI security efforts often remain sidelined rather than integrated into broader security programs.

AI is enhancing cyber threats and the tactics employed by attackers. Adversaries are already leveraging generative models to execute large-scale spear-phishing campaigns, automate reconnaissance, and create polymorphic malware that evolves faster than traditional signature-based defenses can respond. OpenAI and other industry threat analyses have documented how AI facilitates criminal operations, reducing barriers in terms of skills and costs.

The rise of deepfakes is another concerning development, recently exemplified by a multimillion-dollar fraud case involving Hong Kong Police, where deepfaked executives manipulated an employee into authorizing illicit fund transfers. This incident underscores the necessity for evolving verification controls beyond conventional malware defenses. Reports such as the Verizon Data Breach Investigations Report and IBM’s Cost of a Data Breach research highlight that social engineering and credential theft continue to be prevalent entry points for cyber threats, reinforcing the urgency of rapid AI-driven detection and response mechanisms.

What Comes Next

To counter the escalating threats posed by AI, organizations are urged to take immediate action. First, they should develop an AI threat playbook and rigorously test it through red teaming. This includes creating response plans for various risks such as prompt injection and data exfiltration, alongside ensuring robust logging for effective investigations. Establishing an AI security governance framework is also essential, encompassing an inventory of models and data sources, and aligning with established standards like the NIST AI Risk Management Framework.

Organizations should also prioritize AI-native defenses that significantly enhance security, focusing on email and identity protections, as well as endpoint detection and response mechanisms. Furthermore, they need to tighten data and access controls for all AI systems, implementing Zero Trust principles and ensuring suppliers provide necessary security attestations.

High-maturity programs that effectively integrate AI into their security strategies treat AI as both a new vulnerability and a defensive tool. These programs incorporate model telemetry into their security information and event management systems, conduct continuous adversarial testing, and emphasize the importance of secure practices in machine learning development pipelines. EY’s findings illustrate a clear imperative: organizations that delay addressing AI security do so at their own peril. Those that act now to establish governance frameworks, strengthen defenses, and invest in measurable capabilities will gain a crucial advantage in the ongoing battle against cyber threats.

See also
Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

You May Also Like

AI Business

Red Hat advances enterprise AI with Small Language Models that achieve over 98% validity in structured tasks, prioritizing reliability and data sovereignty.

AI Research

OpenAI's o1 model achieves 81.6% diagnostic accuracy in emergency situations, surpassing human doctors and signaling a major shift in medical practice.

AI Regulation

Korea Venture Investment Corp. unveils AI-driven fund management systems by integrating Nvidia H200 GPUs to enhance efficiency and support unicorn growth.

AI Technology

Apple raises Mac mini starting price to $799 amid AI-driven inventory shortages, eliminating the $599 model in response to surging demand for advanced computing.

AI Research

IBM launches a Chicago Quantum Hub to create 750 AI jobs and expands its MIT partnership to advance quantum computing and AI integration.

AI Government

71% of Australian employees use generative AI daily, but only 36% trust its implementation, highlighting urgent calls for better policy frameworks and safeguards.

AI Regulation

The Academy of Motion Picture Arts and Sciences bars AI performances from Oscar eligibility, emphasizing human-authored content amid rising industry tensions over generative AI's...

AI Tools

Workday's stock jumps 3.73% to $126.96 amid AI product updates and earnings optimism, yet analysts cite a 49.8% undervaluation risk at $253.14.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.