The cybersecurity industry spent the past year debating how to authenticate AI agents. The harder question - what happens when a compromised agent controls something that can physically harm people - is now moving from theory to documented incident.
A convergence of research published at RSAC 2026 maps the full chain: from structural gaps in how enterprises manage agent identities, through real-world attacks that exploited those gaps, to the regulatory and architectural responses now taking shape. Taken together, these signals argue that identity and access management for AI agents is not merely an IT governance challenge. For organizations where agents interact with physical systems, it is safety infrastructure.
The Data: Agents Are Everywhere, Controls Are Not
The Cloud Security Alliance's latest survey, commissioned by Aembit and based on 228 responses from IT and security professionals, quantifies the structural problem. Eighty-five percent of organizations already run AI agents in production environments, yet 68% cannot clearly distinguish between human and AI agent activity. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions Task automation agents (67%), research agents (52%), and developer-assist agents (50%) are the most common deployment types. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions
The identity architecture supporting these agents is improvised. Fifty-two percent of organizations assign workload identities to agents, 43% use shared service accounts, and 31% allow agents to operate under human user identities. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions The consequence: 74% of respondents say agents frequently receive more access than they need, and 79% say agents create new access pathways that are difficult to monitor. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions
Ownership of the problem is diffuse. Security teams lead in 28% of organizations, followed by development/engineering (21%) and IT (19%). Only 9% of organizations identify IAM teams as the primary owner of AI agent identity and access. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions A separate Cisco Duo report reinforces this pattern: 74% of IT leaders admit identity security is often an afterthought in infrastructure planning. 2Cybersecurity Dive: Top 3 Factors for Selecting an Identity Access Management Tool
From Credential Theft to Physical Consequence
These gaps become safety-critical when agents operate in cyber-physical environments. A CSA analysis published as part of its seven-part blog series on identity security traces the attack chain from stolen credentials to real-world damage. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure
The blog series cites three 2025 incidents that illustrate this escalation. In September 2025, Anthropic disclosed that a Chinese state-sponsored group weaponized Claude Code for what researchers described as the first large-scale cyberattack run primarily by an AI agent. The agent autonomously scanned for vulnerabilities, wrote exploits, harvested credentials, and moved laterally through networks - including chemical manufacturing companies where compromised process controls could have had catastrophic physical consequences. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure
Separately, Jaguar Land Rover suffered a credential compromise through a supplier that shut down factories for five weeks at a reported cost of £1.9 billion - widely considered the most economically damaging cyberattack in UK history. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure The CSA analysis frames this as the baseline scenario: the same lateral movement pattern, executed by an AI agent at machine speed, is the emerging threat model.
A third incident demonstrated a different vector. Researchers showed that a poisoned Google Calendar invite could hijack Gemini to control smart home devices - lights, shutters, boilers - by exploiting the gap between calendar-reading permissions and actuator-control permissions. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure The mechanism is the same authorization failure at a different scale.
Why Traditional IAM Falls Short
Traditional IAM operates on assumptions that AI agents violate by design. Role-based access control assumes predictable behavior within defined sessions. AI agents are non-deterministic: they reason, call APIs, move data, trigger workflows, and make decisions autonomously. A standard review of IAM tool selection criteria published by Cybersecurity Dive emphasizes that organizations frequently adopt identity solutions as afterthoughts, scrambling to retrofit controls once agents are already in production. 2Cybersecurity Dive: Top 3 Factors for Selecting an Identity Access Management Tool
The CSA survey data confirms this reactive posture. The most common incident containment action is disabling identities or revoking tokens (49%), while only 33% of organizations can modify access policies in real time. 1Cloud Security Alliance: More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions For agents operating in cyber-physical environments - where a compromised action could manipulate chemical concentrations or trigger pressure failures - after-the-fact containment is inadequate by definition.
Authorization Architecture as an Engineered Constraint
The CSA blog series advocates an architectural shift: treating authorization as an engineered safety constraint rather than an administrative control. The analogy is drawn from high-reliability industries. Aviation does not rely on pilots to remember altitude limits. Nuclear facilities do not trust operators to avoid unsafe configurations. For AI agents controlling physical systems, authorization must be similarly automatic and boundary-enforcing. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure
The technical mechanism proposed is OAuth 2.0 Token Exchange (RFC 8693), which enables delegation tokens that preserve the full chain from human authorizer to agent to specific action. Credentials are issued just-in-time, scoped to the immediate operation (e.g., "actuator:write" for one specific resource), and expired within minutes. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure As the CSA series puts it: "Tokens that don't exist can't be stolen."
The EU AI Act adds regulatory weight. Systems used as safety components in critical infrastructure - water, gas, electricity - are classified as high-risk under Annex III, triggering human oversight requirements under Article 14. For agents making thousands of decisions per minute, compliance requires authorization systems that enforce boundaries programmatically and escalate only genuinely exceptional conditions. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure
Looking Ahead
The HYPR 2026 State of Passwordless Identity Assurance report, surveying over 950 security leaders, confirms that the threat landscape is compounding the urgency: 53% of organizations now cite generative AI as their primary identity security concern, overtaking stolen credentials for the first time. 4HYPR: The State of Passwordless Identity Assurance 2026 Meanwhile, industry frameworks are emerging - from Mastercard's Verifiable Intent standard for agent transactions to Cisco's DefenseClaw framework for agent lifecycle security - but enterprise adoption remains early.
Gartner projects that AI agents will reduce the time to exploit account exposures by 50% by 2027. 3CSA Blog: AI Security — When Agents Control Physical Systems, IAM Becomes Safety Infrastructure For organizations where agents touch physical infrastructure, the question posed by the CSA is stark: for every agent with access to critical systems, can your team articulate its authorization envelope - what credentials it holds, what those credentials permit, and whether those permissions exceed operational need? If the answer is "we don't know," that gap is the attack surface.
Bild: Jakub Żerdzicki / Unsplash
