Proofpoint’s recent research highlights that many organisations in Singapore are grappling with AI-related security incidents despite implementing controls designed to safeguard against such risks. The study underscores a troubling lag between the rapid deployment of AI technologies and the capability of security teams to effectively monitor and respond to emerging threats.
The findings reveal that 87% of organisations in Singapore have moved AI assistants beyond pilot programs, while 70% are either piloting or rolling out autonomous agents. Conducted among over 1,400 full-time security professionals across 12 countries, the survey suggests that AI tools have become integral to routine business functions, including customer support, internal communications, email workflows, and third-party collaborations. However, security governance appears to be struggling to keep pace with this widespread adoption.
In Singapore, a significant 58% of organisations express a lack of confidence in their AI security systems’ ability to detect a compromised AI. Among those that have already put AI security measures into place, half reported experiencing either confirmed or suspected AI-related incidents. Readiness to respond appears even weaker, with only 32% affirming they are fully prepared to investigate incidents involving AI assistants or agents. Additionally, 51% of respondents indicated they have difficulty correlating threats across different communication channels.
The attack surface for potential threats is notably vast. Email is identified as the most prevalent vector, cited by 58% of respondents, but exposure also extends to software-as-a-service (SaaS) applications, cloud platforms, collaboration tools such as Teams and Slack, as well as the AI assistants and agents themselves. Among those that had already encountered AI-related incidents, the spread of these incidents across various channels was broader—61% involved file-sharing platforms and 58% involved collaboration tools. This is particularly concerning as AI systems frequently interact with multiple business tools concurrently, necessitating a comprehensive view of activities across connected environments to accurately reconstruct incidents.
Tool sprawl further complicates matters. An overwhelming 98% of organisations in Singapore reported that managing multiple security tools is at least moderately challenging, with 61% categorising it as very or extremely difficult. Respondents cited integration issues and visibility gaps as significant barriers that can delay incident response, particularly at a time when AI systems can exacerbate errors or malicious actions much faster than traditional manual processes.
The research points to a disconnect between confidence in AI’s business utility and confidence in the security measures surrounding it. Although 58% of Singaporean organisations reported having AI security coverage, many acknowledged significant weaknesses in training, governance alignment across teams, and monitoring capabilities. Specifically, 55% highlighted gaps in training, 45% cited issues with governance alignment, and 43% noted insufficient monitoring or logging practices. These deficiencies can hinder companies’ abilities to detect whether AI systems have been manipulated or are improperly handling sensitive data.
Ryan Kalember, Proofpoint’s chief strategy officer, noted that the findings illustrate a widening gap between AI adoption and security preparedness. “This year’s findings highlight a widening divide between AI adoption and security readiness,” he stated. “Organisations are scaling AI assistants and autonomous agents across core workflows, yet many cannot confirm their controls are effective or fully investigate incidents that move across collaboration channels.” As AI becomes further embedded in operational processes, security leaders are urged to rethink protection strategies for trusted interactions among people, data, and AI systems.
Proofpoint further argues that while AI introduces new risks, such as prompt injection, it primarily amplifies existing security vulnerabilities, including running unverified code and mishandling sensitive information. “AI executes them at machine speed and scale,” Kalember said. “When organisations hand AI the keys to act on their behalf—across customers, partners, and internal systems—the blast radius of any one of those failures grows dramatically.” He advocates for applying rigorous controls to AI interactions rather than treating AI as a novel threat category.
The findings from Singapore are particularly striking, given the city-state’s status as a regional hub for digital investment and AI adoption. Among respondents, 51% are actively pursuing vendor and tool consolidation, while 58% believe a unified security platform is more effective than individual point solutions. Looking ahead, 64% of organisations intend to bolster AI protections, and 61% plan to extend security measures across collaboration channels.
George Lee, senior vice president for Asia Pacific and Japan at Proofpoint, emphasised the need for stronger governance surrounding AI use and data access in Singapore. “The organisations that will move fastest and safest will be those that improve data visibility, govern AI agents with the same discipline as privileged users, and reduce the blind spots created by fragmented security tools,” he said. As organisations continue to scale AI technologies, establishing robust security frameworks will be essential to mitigating risks and ensuring safe operations.