Artificial intelligence agents are proliferating within enterprise SaaS environments faster than security teams can track them, leaving many organizations unaware of how much access those agents have been granted.
In August 2025, attackers infiltrated Salesforce environments at more than 700 organizations, including Cloudflare, Palo Alto Networks, and Zscaler. The breach exploited no software vulnerability and involved no phishing; instead, the attackers used OAuth tokens belonging to Drift, an AI-powered chatbot integrated with those Salesforce installations. After compromising Salesloft's internal systems and stealing the tokens, the attackers could turn every downstream connection into an entry point, and from Salesforce's perspective the activity looked like normal integration traffic.
This incident underscores a widespread governance issue regarding the integration of AI in enterprises. A survey conducted in March 2026 by security firm Vorlon revealed that 99.4% of 500 U.S. Chief Information Security Officers experienced at least one security incident related to SaaS or AI ecosystems in 2025. Only three organizations reported no incidents, yet 89.2% of the same CISOs expressed confidence in their OAuth governance, exposing a significant gap between perceived security and actual outcomes. The report emphasized that the issue is not one of awareness but rather a failure of architectural oversight.
Part of the challenge lies in the seemingly innocuous way AI is integrated into workflows. Employees often connect AI tools—such as writing assistants to email accounts or coding agents to repositories—viewing these choices as productivity enhancements rather than security risks. These access points are rarely subjected to formal review, allowing AI agents to begin operating immediately and without scrutiny.
“The most perilous dynamic here is that, unlike a dormant shadow IT application, an AI agent is perpetually active,” stated Gal Nakash, co-founder and Chief Product Officer at Reco, a SaaS and AI security platform. “It reads, writes, summarizes, and interacts—making the risk dynamic rather than static.”
Existing security tools have struggled to keep pace. Cloud Access Security Brokers (CASBs) were built for environments where the primary threat was unauthorized access by human employees: they enforce policy at the network layer and look for behavioral anomalies typical of human activity. AI agents, by contrast, authenticate with OAuth tokens and API keys and operate continuously across multiple systems, often without anyone logging in again. That mismatch means traditional controls can miss an AI agent quietly accumulating excessive access rights.
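A practical first step toward closing that gap is auditing what a given token can actually do. As a minimal sketch (the token contents and scope names below are invented for illustration, not taken from any real system), the payload segment of a JWT access token can be decoded, without signature verification, to list the OAuth scopes it carries:

```python
import base64
import json

def list_token_scopes(jwt: str) -> list[str]:
    """Decode a JWT payload (no signature check) and return its OAuth scopes."""
    payload_b64 = jwt.split(".")[1]
    # Restore the base64url padding that JWTs strip off.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    # "scp" (used by Microsoft) and "scope" (generic OAuth) are common claim names.
    raw = claims.get("scp") or claims.get("scope") or ""
    return raw.split()

# Hypothetical token for illustration: header and signature are dummies.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"appid": "drift-bot", "scp": "Mail.Read Calendars.Read"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{payload}.sig"

print(list_token_scopes(token))  # ['Mail.Read', 'Calendars.Read']
```

Decoding without verification is fine for an inventory audit of tokens you already hold; it is never a substitute for validating tokens at an API boundary.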
Nakash emphasized that a fundamentally different approach is necessary. Reco’s platform does not merely monitor the perimeter but instead maps both human and non-human identities within an organization’s SaaS ecosystem, setting behavioral baselines for each. When an AI agent interacts with systems or data outside its expected parameters, the platform flags these anomalies. “While CASBs monitor the front door, Reco observes what’s already inside,” he noted.
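Reco has not published its detection logic, so the following is only a rough illustration of the baselining idea described above (the identity and resource names are hypothetical): record which resource/action pairs each identity normally uses, then flag any event that falls outside that set.

```python
from collections import defaultdict

# Baseline: the (resource, action) pairs each identity has historically used.
baseline: dict[str, set[tuple[str, str]]] = defaultdict(set)

def observe(identity: str, resource: str, action: str) -> None:
    """Record an event in the identity's behavioral baseline."""
    baseline[identity].add((resource, action))

def is_anomalous(identity: str, resource: str, action: str) -> bool:
    """Flag any event outside the identity's established baseline."""
    return (resource, action) not in baseline[identity]

# Hypothetical learning phase: the agent normally reads calendars and inboxes.
observe("meeting-assistant", "calendar", "read")
observe("meeting-assistant", "inbox", "read")

print(is_anomalous("meeting-assistant", "calendar", "read"))    # False
print(is_anomalous("meeting-assistant", "crm-export", "write"))  # True
```

A production system would score deviations statistically rather than treat the baseline as a hard set, but the core contrast with a CASB holds: the unit of analysis is the identity's behavior inside the SaaS estate, not traffic at the perimeter.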
Such mapping can turn up surprises. Nakash described a common scenario: an AI meeting assistant that multiple employees had independently connected to their Microsoft 365 accounts had accumulated read access to the inboxes and calendars of more than 40 people, including executives and members of the legal team. The tool itself was benign, but the vendor's data retention policy was unclear, creating a compliance exposure that went unnoticed until Reco's mapping surfaced it.
Once the security team identified the issue, remediation was straightforward: the overly broad OAuth grants were revoked, access was reconfigured under a restricted IT-managed setup, and an approval process was established to ensure future AI tool connections were subject to security reviews. “Within the first month, the firm reduced its exposure to third-party AI agents by over 60%,” Nakash explained.
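The remediation described above starts from an inventory of grants. As a hypothetical sketch (the grant records, scope names, and thresholds are all invented for illustration), flagging apps that hold a policy-defined "broad" scope across many users might look like:

```python
# Hypothetical grant inventory, e.g. exported from an identity provider.
grants = [
    {"app": "meeting-assistant", "user": f"user{i}",
     "scopes": {"Mail.Read", "Calendars.Read"}}
    for i in range(42)
] + [
    {"app": "writing-helper", "user": "user1", "scopes": {"Files.Read"}},
]

BROAD_SCOPES = {"Mail.Read", "Mail.ReadWrite"}  # assumed policy list
USER_THRESHOLD = 10  # flag apps holding broad scopes across many users

def flag_overbroad(grants, broad=BROAD_SCOPES, threshold=USER_THRESHOLD):
    """Return apps holding a broad scope on behalf of >= threshold users."""
    counts: dict[str, set[str]] = {}
    for g in grants:
        if g["scopes"] & broad:  # any overlap with the broad-scope policy
            counts.setdefault(g["app"], set()).add(g["user"])
    return sorted(app for app, users in counts.items() if len(users) >= threshold)

print(flag_overbroad(grants))  # ['meeting-assistant']
```

Flagged apps would then feed the revoke-and-reconfigure workflow the article describes: pull the individual grants, reissue access under a single IT-managed service principal with minimal scopes, and gate new connections behind review.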
As the integration of AI in enterprise applications accelerates, organizations face increasing risks. By the end of 2026, Gartner predicts that 40% of enterprise applications will incorporate task-specific AI agents, a significant rise from less than 5% today. Additionally, IBM’s 2025 Cost of a Data Breach Report indicated that organizations with high levels of shadow AI incurred an average of $670,000 more per breach compared to those without. Reco’s research further revealed that 91% of AI tools are currently being used without IT oversight or approval, highlighting a critical vulnerability in many organizations.
The AI agents operating within enterprise SaaS environments are not malicious; they are doing exactly what they were built to do. The pressing issue is that in most organizations, no one has been clearly assigned responsibility for watching them, a gap that becomes more consequential with every new integration.