Organisations across the Asia Pacific and Japan (APJ) are rapidly integrating artificial intelligence (AI) into their business operations, according to new data from Okta’s Oktane on the Road events. While adoption of AI tools is accelerating, governance, accountability, and identity controls appear to be lagging behind, raising significant security risks.
Findings from Okta’s live AI security poll, conducted in Australia, Singapore, and Japan, suggest a widening gap between the deployment of AI systems and the preparedness of organisations to manage the associated risks. The data highlights a regional shift from focusing solely on human access to addressing the needs of a growing array of non-human identities, including AI agents, bots, and service accounts.
This shift is fundamentally altering security risk profiles. As AI systems increasingly engage with data, initiate workflows, and support decision-making, organisations must extend their identity and access management (IAM) strategies beyond human users to encompass autonomous digital systems. However, the survey indicates that many companies are struggling with this transition.
One key theme emerging from the poll results is the unclear ownership of AI-related security risks. In Australia, 41% of respondents reported that no single individual or team is accountable for managing AI security, while only 10% said their identity systems were fully equipped to secure non-human identities and 52% described themselves as only partially prepared. Similar patterns emerged in Singapore and Japan, where accountability for AI risk often sits across multiple functions or remains undefined.
This fragmentation contributes to the rise of “shadow AI”, the use of unapproved or unsupervised AI tools within organisations. Shadow AI was cited as the top security concern in Australia (35%) and Singapore (33%), while data leakage was the primary issue in Japan (36%), followed by unapproved AI agents. Limited visibility into AI systems’ behaviour after deployment further complicates the landscape: fewer than one-third of respondents expressed confidence in their ability to detect when an AI agent operates outside its intended scope, as the sketch below illustrates. Confidence levels were particularly low in Australia (18%) and Japan (8%), indicating a significant gap in effective monitoring mechanisms for autonomous systems.
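In practice, detecting out-of-scope behaviour starts with making every agent action attributable and checkable against an approved list. The Python sketch below shows the idea in its simplest form; the agent identifier, action names, and logging setup are illustrative assumptions, not features of any particular product.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent-audit")

# Hypothetical allow-list: the actions each agent identity is approved to perform.
APPROVED_ACTIONS = {
    "invoice-summariser-agent": {"read:invoices", "write:summaries"},
}

def authorise(agent_id: str, action: str) -> bool:
    """Permit an agent action only if it is on the approved list; log every decision."""
    allowed = action in APPROVED_ACTIONS.get(agent_id, set())
    audit_log.info(
        "%s agent=%s action=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        agent_id,
        action,
        "ALLOW" if allowed else "DENY",
    )
    return allowed

# An out-of-scope request is denied and captured in the audit trail.
authorise("invoice-summariser-agent", "read:invoices")   # ALLOW
authorise("invoice-summariser-agent", "delete:records")  # DENY, and flagged for review

Even a scheme this simple gives security teams the two things the poll respondents lacked: a record of what each agent did, and an explicit signal when it strays outside its approved scope.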
The poll also reveals a widespread lack of readiness within existing IAM frameworks. Across Australia, Singapore, and Japan, fewer than 10% of respondents believe their identity systems are fully equipped to manage and secure non-human identities. Most respondents described their organisations as only partially prepared, suggesting that many have yet to adapt to the scale and complexity that AI-driven access requires.
This presents a structural challenge. AI systems require credentials and permissions to interact with applications and data, yet most IAM systems were designed primarily for human users. As a result, AI agents may inherit excessive access, operate without adequate auditability, or fall outside established governance processes.
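One common remedy is to provision agents with short-lived, narrowly scoped credentials rather than letting them inherit broad human entitlements. The Python sketch below shows a generic OAuth 2.0 client-credentials request for such a token; the endpoint, client identifiers, and scope name are hypothetical placeholders, not any vendor’s actual API.

import requests

# Hypothetical issuer and agent credentials; values are illustrative only.
# Real secrets should come from a secrets manager, never hard-coded.
TOKEN_URL = "https://idp.example.com/oauth2/token"
AGENT_CLIENT_ID = "reporting-agent"
AGENT_CLIENT_SECRET = "load-from-secrets-manager"

def fetch_agent_token() -> str:
    """Request a short-lived, narrowly scoped access token for a non-human identity."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": AGENT_CLIENT_ID,
            "client_secret": AGENT_CLIENT_SECRET,
            # Least privilege: request only the scope this agent actually needs.
            "scope": "reports:read",
        },
        timeout=10,
    )
    resp.raise_for_status()
    token = resp.json()
    # A short lifetime limits the blast radius if the agent's credential leaks.
    if token.get("expires_in", 0) > 900:
        raise RuntimeError("expected a short-lived token for a non-human identity")
    return token["access_token"]

Because each token names a specific agent and scope, access is auditable by default, which is precisely the property the poll suggests most current IAM deployments lack for non-human identities.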
The data indicates that the issue is increasingly recognised at senior leadership levels, although engagement differs by country. In Australia, 70% of boards were reported to be aware of AI-related security risks, but only 28% were considered fully engaged. In Singapore, board awareness stood at 50%, with 31% fully engaged, while Japan exhibited the highest levels of awareness (78%) and engagement (43%), attributed to regulatory expectations and a strong emphasis on data integrity within organisations.
The discrepancy between awareness and engagement suggests that while AI risks are acknowledged, governance frameworks are still evolving and are not consistently integrated across leadership structures. The findings indicate an imbalance between the rapid pace of AI adoption and organisational readiness to govern these technologies effectively.
As AI agents and automated systems become increasingly embedded in operational workflows, many organisations lack clear accountability structures, visibility, and mature identity controls. The poll data show that non-human identities have become a significant security consideration, yet most organisations remain only partially equipped to manage them. As AI plays a larger role in accessing data, executing processes, and aiding decision-making, the need for robust IAM strategies becomes ever more critical.
The results stem from live, interactive polls conducted during Okta’s Oktane on the Road event series in Sydney, Melbourne, Tokyo, and Singapore in October and November 2025. As AI adoption continues to accelerate across the APJ region, the evolution of organisational controls and governance mechanisms remains paramount in safeguarding against emerging threats.