Connect with us

Hi, what are you looking for?

AI Cybersecurity

50% of Security Leaders Unprepared for Escalating AI-Powered Cyber Threats, EY Reports

EY reveals 50% of security leaders feel unprepared for AI-driven cyber threats, with 85% citing inadequate funding to combat these escalating risks.

AI-driven cyberattacks are outpacing corporate defenses, with a striking 50% of security leaders admitting they are unprepared for this evolving threat. A survey conducted by EY, which included over 500 senior cybersecurity officials, highlights a significant disconnect: while 96% recognize AI-enabled attacks as a serious risk, only 46% express strong confidence in their organization’s ability to withstand such threats. The survey reveals that many security teams remain in pilot mode, even as attackers increasingly employ AI technologies on a large scale.

Budget constraints and governance challenges are notable pressure points for organizations. According to EY, 85% of cybersecurity leaders believe their current funding is inadequate to address the risks associated with the AI era. Despite this, a staggering 97% of respondents assert that a structured framework for secure AI utilization is crucial for realizing return on investment (ROI); however, only 20% report having such a framework fully implemented. There is a shift in investment trends, with the percentage of organizations allocating at least a quarter of their security budget to AI-native solutions projected to rise dramatically from 9% to 48% over the next two years.

The slow readiness in enterprise AI security can be attributed to various factors. While organizations are eager to harness AI’s speed and scalability, many initiatives falter in execution. An analysis from MIT found that 95% of enterprise AI projects fail to yield significant ROI, indicating that pilot efforts do not always lead to successful implementations. Additionally, a global survey revealed that although 87% of business leaders anticipate AI will transform their operations, only 29% believe their teams possess the necessary training to adapt.

Security leaders are also grappling with architectural debt as many security operations centers (SOCs) were not designed to manage critical interactions with AI models. This limitation affects their ability to investigate issues related to prompt injection, data poisoning, and model misuse. Without defined ownership and measurable outcomes, AI security efforts often remain sidelined rather than integrated into broader security programs.

AI is enhancing cyber threats and the tactics employed by attackers. Adversaries are already leveraging generative models to execute large-scale spear-phishing campaigns, automate reconnaissance, and create polymorphic malware that evolves faster than traditional signature-based defenses can respond. OpenAI and other industry threat analyses have documented how AI facilitates criminal operations, reducing barriers in terms of skills and costs.

The rise of deepfakes is another concerning development, recently exemplified by a multimillion-dollar fraud case involving Hong Kong Police, where deepfaked executives manipulated an employee into authorizing illicit fund transfers. This incident underscores the necessity for evolving verification controls beyond conventional malware defenses. Reports such as the Verizon Data Breach Investigations Report and IBM’s Cost of a Data Breach research highlight that social engineering and credential theft continue to be prevalent entry points for cyber threats, reinforcing the urgency of rapid AI-driven detection and response mechanisms.

What Comes Next

To counter the escalating threats posed by AI, organizations are urged to take immediate action. First, they should develop an AI threat playbook and rigorously test it through red teaming. This includes creating response plans for various risks such as prompt injection and data exfiltration, alongside ensuring robust logging for effective investigations. Establishing an AI security governance framework is also essential, encompassing an inventory of models and data sources, and aligning with established standards like the NIST AI Risk Management Framework.

Organizations should also prioritize AI-native defenses that significantly enhance security, focusing on email and identity protections, as well as endpoint detection and response mechanisms. Furthermore, they need to tighten data and access controls for all AI systems, implementing Zero Trust principles and ensuring suppliers provide necessary security attestations.

High-maturity programs that effectively integrate AI into their security strategies treat AI as both a new vulnerability and a defensive tool. These programs incorporate model telemetry into their security information and event management systems, conduct continuous adversarial testing, and emphasize the importance of secure practices in machine learning development pipelines. EY’s findings illustrate a clear imperative: organizations that delay addressing AI security do so at their own peril. Those that act now to establish governance frameworks, strengthen defenses, and invest in measurable capabilities will gain a crucial advantage in the ongoing battle against cyber threats.

See also
Rachel Torres
Written By

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

You May Also Like

Top Stories

Skylark Labs unveils a 24/7 Fixed FOD detection system at airports, enhancing runway safety and eliminating costly operational downtimes through autonomous monitoring.

AI Finance

CFOs report 83% anticipate AI investment increases by 2026, yet only 33% achieve successful large-scale deployments, raising ROI concerns.

AI Education

Mediazoo launches Finer Vision to combat the 96% AI skills gap in the UK, offering training that can reduce course development time by up...

AI Generative

AI tools like the Relumi App enhance old photos into dynamic videos, achieving user ratings of 4.8/5 and revolutionizing personal storytelling through animation.

AI Research

Oomiji's report forecasts a dramatic shift in marketing, projecting that 45% of agency roles may vanish by 2030 as AI-driven services reach $220 billion.

AI Business

Enterprises face rising coordination challenges as AI agents proliferate across systems, with Salesforce's Agentforce Health AI automating critical healthcare workflows.

AI Generative

Capcom commits to excluding generative AI from final game products while planning to utilize AI in development, addressing growing consumer skepticism.

AI Technology

Healthcare leaders at HIMSS26 shift focus to operational AI value, with Epic introducing Agent Factory to enhance EHR integration amid pressing governance challenges.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.