The Problem in Numbers
A new survey published by the Cloud Security Alliance (CSA) at RSAC 2026 quantifies what many security teams have suspected: enterprise identity and access management was not built for autonomous AI agents, and organizations are struggling to adapt.
Seventy-three percent of organizations expect AI agents to become vital within the next year, yet 68% cannot clearly distinguish between human and AI agent activity in their environments. The survey, commissioned by Aembit and based on 228 responses from IT and security professionals, found that 74% of respondents say AI agents often receive more access than necessary, and 79% believe agents create new access pathways that are difficult to monitor ("More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions," Cloud Security Alliance).
The identity patchwork is part of the problem. According to the report, 52% of organizations assign workload identities to agents, 43% rely on shared service accounts, and 31% allow agents to operate under human user identities. The result: agents routinely inherit permissions far beyond their intended role.
"Existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale," said Hillary Baron, AVP of Research at CSA.
From Theory to Factory Floor
A companion CSA blog series on identity security contextualizes these findings against real-world incidents from late 2025 that demonstrated how quickly credential-based attacks escalate when AI agents are involved ("AI Security: When Agents Control Physical Systems, IAM Becomes Safety Infrastructure," Cloud Security Alliance).
Anthropic disclosed that a Chinese state-sponsored group had weaponized its Claude Code tool in September 2025 to conduct what researchers described as the first large-scale cyberattack run primarily by an AI agent. The agent autonomously scanned for vulnerabilities, wrote exploits, harvested credentials, and moved laterally through targeted networks - including chemical manufacturing companies where compromised process controls could have physical consequences ("Disrupting the first reported AI-orchestrated cyber espionage campaign," Anthropic).
Separately, Jaguar Land Rover suffered what has been called the most economically damaging cyberattack in UK history, with a credential compromise through a supplier shutting down factories for five weeks at a cost of £1.9 billion. The CSA analysis points out that the same lateral movement pattern, executed by an AI agent operating at machine speed, represents the emerging threat model.
IBM's X-Force 2025 data reinforces the trend: abusing valid accounts accounted for 30% of all incidents, with an 800% spike in credentials stolen by infostealer malware during the first half of 2025.
Nobody Owns the Problem
One of the survey's most striking findings is the fragmentation of responsibility. Only 9% of organizations identify IAM teams as the primary owner of AI agent identity and access, with responsibility split across security (28%), development/engineering (21%), and IT (19%).
This fragmentation has practical consequences. A third of organizations do not know how often AI agent credentials are rotated. Only 22% report that access frameworks are applied very consistently to AI agents. When incidents occur, the most common containment action is disabling identities or revoking tokens (49%), while only 33% can modify access policies in real time.
The pattern suggests that most organizations are relying on after-the-fact governance rather than embedded, identity-bound enforcement - a reactive posture that AI-speed attacks are designed to outpace.
Standards Efforts: Building Trust Frameworks for Agents
While the CSA data maps the problem, parallel industry initiatives aim to build the infrastructure for solutions.
Mastercard and Google have unveiled Verifiable Intent, an open, standards-based trust framework designed for "agentic commerce" - scenarios where AI agents plan, decide, and complete purchases autonomously ("Mastercard and Google Team Up to Build Trust for AI-Powered Shopping," The Defiant, via the FIDO Alliance). The framework uses a layered SD-JWT (Selective Disclosure JSON Web Token) credential format that creates a tamper-evident chain linking a consumer's authorization to an agent's specific actions (Verifiable Intent open specification, GitHub). The specification, published as an open draft on GitHub with backing from IBM, Fiserv, and Checkout.com, provides cryptographic proof that an agent acted within the bounds of what a human actually authorized.
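To make the SD-JWT mechanism concrete, here is a minimal sketch of how selective disclosure works in general: each claim becomes a salted "disclosure," the issuer signs only the digests of those disclosures, and the agent later reveals just the claims a verifier needs. The claim names and values below are illustrative assumptions, not the actual Verifiable Intent schema.

```python
import base64, hashlib, json, secrets

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(name: str, value) -> tuple[str, str]:
    """Create a salted disclosure and its digest, SD-JWT style."""
    salt = _b64(secrets.token_bytes(16))
    disclosure = _b64(json.dumps([salt, name, value]).encode())
    digest = _b64(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

# Hypothetical consumer authorization: spending limit, merchant, expiry.
claims = {"max_amount_usd": 50, "merchant": "example-store", "exp": 1767225600}
disclosures, digests = {}, []
for name, value in claims.items():
    d, h = make_disclosure(name, value)
    disclosures[name] = d
    digests.append(h)

# The issuer signs only the digests ("_sd"); the agent later reveals just
# the disclosures a merchant needs (e.g. the spending limit) to prove its
# mandate, keeping the other claims hidden.
payload = {"iss": "consumer-wallet", "sub": "shopping-agent", "_sd": digests}

def verify_disclosure(disclosure: str, signed_digests: list[str]) -> bool:
    """A verifier recomputes the digest and checks it was in the signed set."""
    return _b64(hashlib.sha256(disclosure.encode()).digest()) in signed_digests
```

Because every disclosure is salted, a verifier who sees one transaction cannot correlate or guess the undisclosed claims, yet any tampering with a revealed claim breaks the digest match.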
The CSA blog series advocates a similar architectural philosophy for cyber-physical systems, pointing to OAuth 2.0 Token Exchange (RFC 8693) as a mechanism for delegation tokens that preserve the chain from human to agent to action. The principle is the same: credentials should be scoped to the immediate operation, issued just-in-time, and expired quickly. "Tokens that don't exist can't be stolen," as the CSA series puts it.
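In RFC 8693 terms, the agent (the "actor") presents both its own token and the human's (the "subject") token to the authorization server and receives a narrowly scoped delegation token whose `act` claim records who acted on whose behalf. The sketch below builds the standard form parameters for such a request; the endpoint URL, token placeholders, and scope string are assumptions for illustration.

```python
# Assumed identity-provider endpoint; real deployments use their IdP's
# token endpoint and send this dict as an application/x-www-form-urlencoded
# POST body.
TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"

def build_exchange_request(human_token: str, agent_token: str, scope: str) -> dict:
    """Form parameters for an RFC 8693 token exchange: the agent asks for a
    short-lived token scoped to one operation, on behalf of a human."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": human_token,    # the human whose authority is delegated
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "actor_token": agent_token,      # the agent's own dedicated identity
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": scope,                  # only the immediate operation
    }

req = build_exchange_request("human-access-token", "agent-access-token",
                             "erp:read:invoices")
```

The returned access token then carries the human as `sub`, the agent in the `act` claim, and a short `exp` - exactly the human-to-agent-to-action chain the CSA series describes.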
The EU AI Act adds regulatory pressure. Systems used as safety components in critical infrastructure - water, gas, electricity - are classified as high-risk under Annex III, triggering human oversight requirements under Article 14. For agents making thousands of decisions per minute, this means authorization systems must enforce boundaries programmatically and escalate only genuinely exceptional conditions.
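Programmatic boundary enforcement with human escalation can be as simple as a hard guard in the authorization path: in-bounds actions proceed at machine speed, out-of-bounds requests queue for a person. The parameter name and limits below are hypothetical, not drawn from any cited system.

```python
# Hypothetical operating envelope for an agent controlling a treatment valve.
LIMITS = {"valve_open_pct": (0.0, 40.0)}

def authorize_action(param: str, value: float) -> str:
    """Enforce hard boundaries in code; humans review only the exceptions.

    Returns "allow" for in-bounds requests, "escalate" otherwise, so an
    oversight queue sees a handful of exceptions rather than thousands of
    routine decisions per minute.
    """
    low, high = LIMITS[param]
    return "allow" if low <= value <= high else "escalate"
```

The design choice matters for Article 14: oversight stays meaningful precisely because the system filters out everything a human could never review in time.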
What This Means for Security Teams
The convergence of these signals - survey data, real-world incidents, and emerging standards - points to a clear set of priorities:
- Inventory agent identities. Many organizations do not have a definitive count of AI agents operating in production, let alone a map of their permissions. The CSA data shows 85% already use agents in production environments.
- Eliminate identity inheritance. Agents borrowing human identities or shared service accounts is the fastest path to over-privileged access. Dedicated, scoped identities are the baseline.
- Assign ownership. With responsibility fragmented across four or more teams, establishing a single accountable function for agent IAM is a governance prerequisite.
- Adopt time-bound, scoped credentials. Whether through OAuth token exchange or frameworks like Verifiable Intent, the direction is toward credentials that are narrow in scope and short in lifespan.
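The last two recommendations - dedicated identities and time-bound, scoped credentials - can be sketched together: mint a credential bound to one agent and one scope with a short TTL, and verify signature, expiry, and scope on every use. This is a generic HMAC-based illustration under assumed names, not any vendor's implementation; production systems would use a managed signing key and a standard token format.

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-key"  # illustrative only; use a managed key in practice

def issue_agent_credential(agent_id: str, scope: str, ttl_s: int = 300) -> str:
    """Mint a credential for one dedicated agent identity and one scope,
    expiring in minutes rather than living indefinitely."""
    body = json.dumps({"sub": agent_id, "scope": scope,
                       "exp": time.time() + ttl_s}).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def check_credential(token: str, required_scope: str) -> bool:
    """Reject on bad signature, expiry, or any scope mismatch."""
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(body)
    return claims["exp"] > time.time() and claims["scope"] == required_scope

token = issue_agent_credential("invoice-agent-7", "erp:read:invoices")
```

An expired or wrong-scope token fails closed, which is the point of the CSA's "tokens that don't exist can't be stolen" posture: even a leaked credential is useless minutes later and outside its one narrow operation.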
Looking Ahead
Gartner projects that AI agents will reduce the time to exploit account exposures by 50% by 2027. The organizations that treat agent identity as a first-class security domain now - rather than after a breach forces the issue - will have a structural advantage as both threats and regulations accelerate. The standards are emerging. The survey data shows most enterprises have not yet caught up.
