The Numbers Paint a Clear Picture
A new Cloud Security Alliance survey, commissioned by Zenity and based on 445 responses from IT and security professionals, quantifies what has been an emerging concern throughout 2026: AI agent behavior is outpacing the controls meant to govern it.
Fifty-three percent of organizations report that AI agents have exceeded their intended permissions. Nearly half (47%) experienced a security incident involving an AI agent in the past year, with detection and response times often stretching to hours or days. Only 16% of respondents said they had high confidence in their ability to detect agent-specific threats, and just 8% said their agents never exceeded their intended scope.
Shadow agents compound the problem. Fifty-four percent of organizations report between 1 and 100 unsanctioned AI agents operating in their environment, with ownership frequently unclear: only 15% said that 76-100% of their agents have defined ownership.
"The findings highlight gaps in visibility, runtime controls, and action traceability," said Hillary Baron, AVP of Research at CSA.
The Shared Workspace Problem: Authorized Retrieval, Unauthorized Recipients
A separate CSA blog post published this week dissects why these scope violations happen so routinely. The answer lies in a fundamental mismatch between how authorization was designed and how AI agents actually operate.
OAuth, the protocol underpinning most enterprise authorization, was built for a one-user, one-application model. An agent, however, authenticates with one user's credentials but outputs results into shared contexts: Slack channels, Teams workspaces, shared dashboards. The authorization check happens at the point of data retrieval, not at the point of output.
The CSA analysis cites the OpenID Foundation's example: a CFO deploys an agent in a Slack channel. The agent inherits the CFO's access and retrieves compensation data when a junior analyst asks about the Q3 budget. Authorization passes, since the CFO can access those files. But the agent posts the answer to the entire channel, and now every member knows the CEO's salary.
This is not a theoretical scenario. Between June and October 2025, four critical-severity vulnerabilities (CVSS 9.3-9.4) in Anthropic's Slack MCP Server, Microsoft 365 Copilot, ServiceNow's AI Platform, and Salesforce Agentforce all followed the same pattern: authorized retrieval, unauthorized recipients. In each case, the platform verified whether the invoking user could access the data but failed to verify whether all recipients of the output were authorized to see it.
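The "authorization must follow the audience" principle described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual API: the function names and the ACL structure are assumptions chosen to mirror the CFO/Slack scenario.

```python
# Hypothetical sketch of audience-aware output filtering. An agent may
# post a resource into a shared channel only if EVERY channel member is
# individually authorized to read it -- the check moves from the point
# of retrieval to the point of output.

def user_can_access(user_id: str, resource_id: str, acl: dict) -> bool:
    """Check one user's entitlement to a resource via a simple ACL."""
    return user_id in acl.get(resource_id, set())

def authorized_for_audience(resource_id: str, audience: list, acl: dict) -> bool:
    """True only if all recipients of the output may see the resource,
    regardless of whether the invoking user (e.g. the CFO) may."""
    return all(user_can_access(u, resource_id, acl) for u in audience)

# Toy ACL mirroring the example: only the CFO may read compensation data.
acl = {
    "comp_data.xlsx": {"cfo"},
    "q3_budget.pdf": {"cfo", "analyst", "intern"},
}
channel_members = ["cfo", "analyst", "intern"]

# The budget may be posted to the channel; compensation data may not,
# even though the agent (acting as the CFO) could retrieve both.
print(authorized_for_audience("q3_budget.pdf", channel_members, acl))   # True
print(authorized_for_audience("comp_data.xlsx", channel_members, acl))  # False
```

The design point is that the intersection of all recipients' permissions, not the invoking user's permissions, defines what the agent may emit into a shared space.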
Curity's Runtime Approach: Tokens That Carry Intent
Against this backdrop, Sweden-based Curity this week announced Access Intelligence, an extension to its Identity Server platform designed to enforce authorization at runtime rather than only at initial authentication, as reported by CSO Online.
The core idea: instead of granting agents broad, static permissions at deployment, Curity's system issues a separate, purpose-specific OAuth token for each action an agent performs, carrying metadata about the agent's intent and scope. When an agent starts a new task, it requests a new token with a new set of permissions. If an action is classified as high-risk (such as transferring funds), human authorization can be required before the token is issued.
"Because we let an agent do something now doesn't mean we should be allowing it to do this a minute later," said Curity CTO Jacob Ideskog.
The approach contrasts with both traditional API gateways, which apply static rules, and out-of-band monitoring systems that analyze agent behavior after the fact. Curity's system acts as an inline microservice through which every agent request must pass, validating in real time whether the requested action is still within scope.
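A rough sketch of what per-task, intent-carrying tokens with an inline validation step could look like. This is an illustration of the general pattern described above under stated assumptions, not Curity's implementation: the claim names, risk policy, and helper functions are all hypothetical.

```python
# Hypothetical sketch: short-lived, purpose-specific tokens minted per
# task, with a human-approval gate for high-risk actions and an inline
# check on every request. Not a real vendor API.
import secrets
import time

HIGH_RISK_ACTIONS = {"transfer_funds", "delete_records"}  # assumed policy

def issue_task_token(agent_id: str, action: str, scope: set,
                     human_approved: bool = False) -> dict:
    """Mint a token bound to one declared action and a narrow resource
    scope; refuse high-risk actions without explicit human approval."""
    if action in HIGH_RISK_ACTIONS and not human_approved:
        raise PermissionError(f"human approval required for {action!r}")
    return {
        "token": secrets.token_urlsafe(16),
        "sub": agent_id,
        "intent": action,         # what the agent declared it will do
        "scope": scope,           # the only resources this token may touch
        "exp": time.time() + 60,  # expires with the task, not the session
    }

def validate_request(token: dict, action: str, resource: str) -> bool:
    """Inline check applied to every agent request: the action and
    resource must match the token's declared intent and scope, and the
    token must not have expired."""
    return (token["intent"] == action
            and resource in token["scope"]
            and time.time() < token["exp"])

# A token issued to read the budget cannot be reused for other data,
# and a funds transfer cannot even obtain a token without approval.
token = issue_task_token("agent-7", "read_budget", {"q3_budget.pdf"})
print(validate_request(token, "read_budget", "q3_budget.pdf"))   # True
print(validate_request(token, "read_budget", "comp_data.xlsx"))  # False
```

The contrast with a static API gateway is that nothing here is granted at deployment time: every permission is minted per task and dies with it, which is one way to realize Ideskog's point that past permission implies nothing about the next minute.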
Ideskog acknowledged that no single solution covers the full agent security problem. "Up to this point, the IAM industry has focused on the identity part. But the real question is the access," he said, adding that privileged access management vendors "don't have good answers yet."
Governance Is Lagging Adoption
The CSA/Zenity survey underscores the governance gap. While 50% of organizations report having at least partially documented governance policies for AI agents, only 31% have formally adopted a policy, and just 13% feel highly prepared for upcoming AI-related regulations. HIPAA (43%), the NIST AI Risk Management Framework (37%), and SOC 2/ISO 27001 (34%) are the most commonly cited governance influences.
The regulatory pressure is real. The CSA blog analysis notes that GDPR Articles 5 and 32, CCPA Section 1798.150, and Sarbanes-Oxley Section 404 all have provisions that could apply when AI agents expose data to unauthorized internal recipients, with penalties ranging from statutory damages to fines of up to 4% of global annual revenue.
Looking Ahead
The convergence is clear: survey data confirms that scope violations are routine, technical analysis explains the architectural root cause, and vendors are beginning to ship products that address it. But enterprise adoption remains early. Forty-three percent of organizations report that more than half of their employees use AI agents regularly, spanning IT, security, customer service, and engineering. The deployment curve is steep, and the authorization architecture beneath it was built for a different era.
The question CSA poses to security teams is a useful stress test: for every AI agent deployed in a shared workspace, can you demonstrate that its output is restricted to data that every member of that workspace is authorized to see? For most organizations, the honest answer is still no. The tools to fix that are arriving, but the gap between agent deployment speed and authorization maturity is the vulnerability window that matters most right now.
Image: towel.studio / Unsplash
