
58% of Singapore Firms Doubt AI Security Controls Despite 87% AI Deployment, Study Reveals

58% of Singapore organisations lack confidence that their security controls can detect a compromised AI, even as 87% have deployed AI technologies, underscoring urgent gaps in cybersecurity preparedness.

Proofpoint’s recent research highlights that many organisations in Singapore are grappling with AI-related security incidents despite implementing controls designed to safeguard against such risks. The study underscores a troubling lag between the rapid deployment of AI technologies and the capability of security teams to effectively monitor and respond to emerging threats.

The findings reveal that 87% of organisations in Singapore have moved AI assistants beyond pilot programs, while 70% are either piloting or rolling out autonomous agents. Conducted among over 1,400 full-time security professionals across 12 countries, the survey suggests that AI tools have become integral to routine business functions, including customer support, internal communications, email workflows, and third-party collaborations. However, security governance appears to be struggling to keep pace with this widespread adoption.

In Singapore, a significant 58% of organisations lack confidence in their security systems’ ability to detect a compromised AI. Among those that have already put AI security measures in place, half reported either confirmed or suspected AI-related incidents. Readiness to respond appears even weaker: only 32% said they are fully prepared to investigate incidents involving AI assistants or agents, and 51% reported difficulty correlating threats across different communication channels.

The attack surface for potential threats is notably vast. Email is identified as the most prevalent vector, cited by 58% of respondents, but exposure also extends to software-as-a-service (SaaS) applications, cloud platforms, collaboration tools such as Teams and Slack, as well as the AI assistants and agents themselves. Among those that had already encountered AI-related incidents, the spread of these incidents across various channels was broader—61% involved file-sharing platforms and 58% involved collaboration tools. This is particularly concerning as AI systems frequently interact with multiple business tools concurrently, necessitating a comprehensive view of activities across connected environments to accurately reconstruct incidents.

Tool sprawl further complicates matters. An overwhelming 98% of organisations in Singapore reported that managing multiple security tools is at least moderately challenging, with 61% categorising it as very or extremely difficult. Respondents cited integration issues and visibility gaps as significant barriers that can delay incident response, particularly at a time when AI systems can exacerbate errors or malicious actions much faster than traditional manual processes.

The research points to a disconnect between confidence in AI’s business utility and confidence in the security measures surrounding it. Although 58% of Singaporean organisations reported having AI security coverage, many acknowledged significant weaknesses in training, governance alignment across teams, and monitoring capabilities. Specifically, 55% highlighted gaps in training, 45% cited issues with governance alignment, and 43% noted insufficient monitoring or logging practices. These deficiencies can hinder companies’ abilities to detect whether AI systems have been manipulated or are improperly handling sensitive data.

Ryan Kalember, Proofpoint’s chief strategy officer, noted that the findings illustrate a widening gap between AI adoption and security preparedness. “This year’s findings highlight a widening divide between AI adoption and security readiness,” he stated. “Organisations are scaling AI assistants and autonomous agents across core workflows, yet many cannot confirm their controls are effective or fully investigate incidents that move across collaboration channels.” As AI becomes further embedded in operational processes, security leaders are urged to rethink protection strategies for trusted interactions among people, data, and AI systems.

Proofpoint further argues that while AI introduces new risks, such as prompt injection, it primarily amplifies existing security vulnerabilities, including running unverified code and mishandling sensitive information. “AI executes them at machine speed and scale,” Kalember said. “When organisations hand AI the keys to act on their behalf—across customers, partners, and internal systems—the blast radius of any one of those failures grows dramatically.” He advocates for applying rigorous controls to AI interactions rather than treating AI as a novel threat category.

The findings from Singapore are particularly striking, given the city-state’s status as a regional hub for digital investment and AI adoption. Among respondents, 51% are actively pursuing vendor and tool consolidation, while 58% believe a unified security platform is more effective than individual point solutions. Looking ahead, 64% of organisations intend to bolster AI protections, and 61% plan to extend security measures across collaboration channels.

George Lee, senior vice president for Asia Pacific and Japan at Proofpoint, emphasised the need for stronger governance surrounding AI use and data access in Singapore. “The organisations that will move fastest and safest will be those that improve data visibility, govern AI agents with the same discipline as privileged users, and reduce the blind spots created by fragmented security tools,” he said. As organisations continue to scale AI technologies, establishing robust security frameworks will be essential to mitigating risks and ensuring safe operations.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.