
27% of IT Leaders Fear Deepfake Attacks Amid AI Governance Gaps in Ireland and UK

27% of IT leaders in Ireland and the UK are alarmed by deepfake threats, highlighting significant governance gaps amid rapid AI adoption, according to Storm Technology.

More than a quarter (27%) of IT leaders in Ireland and the UK fear they will be unable to detect deepfake attacks in the coming year, according to a survey of 200 IT decision-makers conducted by Storm Technology, now part of Littlefish. The finding underscores growing apprehension about the security implications of rapid AI adoption.

The survey results indicate that the anxiety surrounding deepfake detection is particularly heightened among larger enterprises, where 33% of respondents reported concerns, compared to 23% in smaller businesses. Data breaches were identified as the most pressing issue, cited by 34% of IT leaders, followed closely by concerns over data protection (33%) and the risks associated with adversarial cyber-attacks (31%).

In addition to deepfake threats, the prevalence of shadow AI—defined as the use of unsanctioned or unapproved tools—has emerged as a significant concern among IT leaders. One in four respondents listed this as a top worry, while half acknowledged that employees within their organizations are using such tools. Notably, 55% of respondents admitted to utilizing unsanctioned AI platforms themselves, and 42% expressed doubts about the safety of their company data when inputting it into these applications. Only 60% of organizations have established clear guidelines regarding which AI tools are permitted for use.

The survey also highlighted substantial governance gaps. Almost one-third of companies lack a strategy to manage risks associated with AI, and 21% of IT leaders do not have a high degree of trust in AI tools. Among Irish respondents, the concern is even more pronounced, with 35% believing their governance measures are inadequate, compared to 28% overall. Approximately four in five participants agreed that their organizations need to enhance regulations governing AI tools.

Data readiness poses an additional challenge, with a quarter of IT leaders indicating that their business data is not adequately prepared for AI applications. Furthermore, 23% reported that their data governance policies are insufficient to support secure AI adoption. As a result, 78% believe that a dedicated project focused on data readiness is essential.

Sean Tickle, Cyber Services Director at Littlefish, emphasized the urgency of addressing these issues, stating, “AI is rapidly reshaping the enterprise landscape, but the speed of adoption is outpacing the maturity of governance. When nearly a third of organizations lack a strategy to manage AI risk, and over half of IT leaders admit to using unsanctioned tools, it’s clear that shadow AI isn’t just a user issue – it’s a leadership one.”

Tickle further noted, “Deepfake threats, data governance gaps, and a lack of trust in AI platforms are converging into a perfect storm. To stay secure and competitive, businesses must invest in visibility, policy clarity, and data readiness – because without those, AI becomes a liability, not a differentiator.”

The findings from this survey reflect a broader trend in the evolving landscape of technology, where the rapid adoption of AI must be matched with adequate governance and security measures. As organizations strive to harness the benefits of artificial intelligence, addressing these challenges will be crucial to mitigating risks and ensuring sustainable growth in the digital age.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.