
AI Cybersecurity

Executives Split on AI Trust: 82% to Boost Cybersecurity Budgets Amid Job Cuts as U.S. and U.K. Leaders Diverge

82% of executives plan to boost cybersecurity budgets while 75% anticipate AI-driven job cuts, with U.S. leaders trusting AI tools far more than their skeptical U.K. counterparts.

A recent report by AXIS Capital indicates that executives are increasingly bracing for a surge in cyber threats driven by artificial intelligence. The findings stem from a survey involving 500 CEOs and Chief Information Security Officers (CISOs) across the United States and the United Kingdom, revealing critical disparities in perceptions regarding AI’s benefits, risks, and organizational readiness.

Among the notable findings, 82% of organizations intend to boost their cybersecurity budgets. However, 75% of respondents anticipate job cuts attributed to AI-driven efficiencies. While a significant portion of U.S. executives exhibit strong confidence in AI’s role in enhancing cybersecurity, their U.K. counterparts express considerable skepticism concerning preparedness and trust in AI tools.

This divergence in perspectives is particularly significant, as executives are making long-term strategic decisions based on their beliefs about AI’s impact on cyber defense. If reliance on AI tools proves unfounded or if organizations fail to adequately prepare for potential threats, they may face exacerbated damage during cyber incidents. The contrasting views among decision-makers could also impede crucial investments or leave vulnerabilities unaddressed.

AI-driven cyberattacks have emerged as the foremost concern among executives, with 25.2% of all respondents ranking them as the leading threat for the coming year. This concern is notably higher in the U.K., where 29.6% of respondents marked AI attacks as their primary worry, compared to 20.8% in the U.S. The disparity suggests that regional experiences and policies may significantly influence the level of concern expressed by leaders.

Trust in AI cybersecurity tools varies markedly by role and geography. In the U.S., 82.6% of CEOs expressed personal trust in AI tools to aid cybersecurity decisions, whereas only 49.6% of U.K. CEOs felt the same. For CISOs, the trust levels remain high in the U.S. at 83.0%, but plummet to 37.0% in the U.K. These findings underscore the skepticism among many U.K. leaders regarding the reliability of AI tools for critical cybersecurity functions.

The most pressing AI-related risks differ among executives. While CEOs are primarily concerned about data leakage, with 28.7% selecting it as their top risk, CISOs highlight the unauthorized use of AI tools by employees—termed “shadow AI”—as their primary issue, with 27.2% flagging it. This distinction emphasizes the differing focus areas: CEOs concentrate on external threats and business exposure, while CISOs are more attuned to internal controls and employee behavior that could undermine security efforts.

Looking ahead, nearly 82% of respondents anticipate an increase in cybersecurity spending over the next year. Simultaneously, 75.2% indicated they would likely reduce cybersecurity personnel due to expected productivity gains from AI. This trend reveals a shift towards enhanced financial investments in tools and platforms, accompanied by a reduction in the workforce managing them.

Confidence in cyber readiness significantly differs between the U.S. and U.K. In the U.S., 94.9% of CEOs believe their organizations are prepared to handle a major cyberattack, with 92.9% of CISOs agreeing. In contrast, only 70.7% of U.K. CEOs and 81.9% of CISOs report similar confidence levels. The disparity grows when assessing preparedness against AI-driven threats specifically. Only 44% of U.K. respondents claim readiness for AI-related attacks, while 84.8% of U.S. executives feel adequately prepared.

The findings of this AXIS Capital report underscore the evolving landscape of cybersecurity in the age of AI, where differences in regional perspectives and executive roles could shape strategies and investments in the coming years. As organizations navigate these challenges, the need for cohesive and informed decision-making becomes increasingly critical to mitigate risks and enhance overall security posture.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.

