
AI Cybersecurity

27% of IT Leaders Fear Deepfake Attacks Amid AI Governance Gaps in Ireland and UK

27% of IT leaders in Ireland and the UK are alarmed by deepfake threats, highlighting significant governance gaps amid rapid AI adoption, according to Storm Technology.

More than a quarter (27%) of IT leaders in Ireland and the UK fear they will be unable to detect deepfake attacks in the coming year. The finding comes from a survey of 200 IT decision-makers conducted by Storm Technology, now part of Littlefish, and underscores growing apprehension about the security implications of rapid AI adoption.

The survey results indicate that the anxiety surrounding deepfake detection is particularly heightened among larger enterprises, where 33% of respondents reported concerns, compared to 23% in smaller businesses. Data breaches were identified as the most pressing issue, cited by 34% of IT leaders, followed closely by concerns over data protection (33%) and the risks associated with adversarial cyber-attacks (31%).

In addition to deepfake threats, the prevalence of shadow AI, the use of unsanctioned or unapproved AI tools, has emerged as a significant concern among IT leaders. One in four respondents listed it as a top worry, and half acknowledged that employees within their organizations use such tools. Notably, 55% of respondents admitted to using unsanctioned AI platforms themselves, and 42% doubted the safety of company data entered into these applications. Only 60% of organizations have established clear guidelines on which AI tools are permitted.

The survey also highlighted substantial governance gaps. Almost one-third of companies lack a strategy to manage risks associated with AI, and 21% of IT leaders do not have a high degree of trust in AI tools. Among Irish respondents, the concern is even more pronounced, with 35% believing their governance measures are inadequate, compared to 28% overall. Approximately four in five participants agreed that their organizations need to enhance regulations governing AI tools.

Data readiness poses an additional challenge, with a quarter of IT leaders indicating that their business data is not adequately prepared for AI applications. Furthermore, 23% reported that their data governance policies are insufficient to support secure AI adoption. As a result, 78% believe that a dedicated project focused on data readiness is essential.

Sean Tickle, Cyber Services Director at Littlefish, emphasized the urgency of addressing these issues, stating, “AI is rapidly reshaping the enterprise landscape, but the speed of adoption is outpacing the maturity of governance. When nearly a third of organizations lack a strategy to manage AI risk, and over half of IT leaders admit to using unsanctioned tools, it’s clear that shadow AI isn’t just a user issue – it’s a leadership one.”

Tickle further noted, “Deepfake threats, data governance gaps, and a lack of trust in AI platforms are converging into a perfect storm. To stay secure and competitive, businesses must invest in visibility, policy clarity, and data readiness – because without those, AI becomes a liability, not a differentiator.”

The findings from this survey reflect a broader trend in the evolving landscape of technology, where the rapid adoption of AI must be matched with adequate governance and security measures. As organizations strive to harness the benefits of artificial intelligence, addressing these challenges will be crucial to mitigating risks and ensuring sustainable growth in the digital age.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.