NullSec.news// Cyber news for anyone

AI Agent Security: Three CSA Reports Converge on Identity, Scope, and Trust Chain Failures

Three Cloud Security Alliance research efforts released in the same week reveal a unified problem: AI agents are operating at enterprise scale with borrowed identities, excessive permissions, and invisible trust chains - and most organizations lack the controls to detect or contain the resulting risks.


Three Cloud Security Alliance research publications released between April 16 and April 20 - each commissioned by a different vendor but analyzed by CSA's own research team - converge on a single conclusion: the enterprise AI agent security problem is structural, not incremental. Identity confusion, scope violations, and invisible trust chains are three facets of the same architectural gap.

Borrowed Identities, Inherited Risk

The CSA/Aembit identity report, updated April 20, finds that most AI agents do not operate as distinct identities. Instead, organizations assign them workload identities, shared service accounts, or - in 31% of cases - let agents operate under a human user's identity. [1] Only 18% of organizations determine an AI agent's access based on the agent's own permissions; the rest anchor access decisions in human context, predefined rules, or shared accounts. [1]

The consequence is predictable: 74% of respondents say AI agents frequently receive more access than necessary, and 79% say agents introduce new access pathways that are difficult to monitor. [1] When permissions are inherited rather than explicitly scoped, the principle of least privilege breaks down at machine speed.
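The difference between inherited and explicitly scoped access can be sketched in a few lines. This is an illustrative model only, not any vendor's API: the `AgentIdentity` class, the scope names, and the `authorize` helper are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """A distinct machine identity with its own explicit permission set."""
    agent_id: str
    scopes: frozenset = field(default_factory=frozenset)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Allow only actions the agent itself was granted - nothing inherited."""
    return action in agent.scopes

# An agent scoped to a single job: read CRM records, nothing else.
crm_reader = AgentIdentity("agent-crm-01", frozenset({"crm:read"}))

print(authorize(crm_reader, "crm:read"))   # scoped action: allowed
print(authorize(crm_reader, "mail:send"))  # outside scope: denied
```

Under a borrowed human identity, both checks would pass; with a distinct, explicitly scoped identity, the second request fails closed.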

Scope Violations Are Routine, Detection Is Not

The CSA/Zenity survey of 445 IT and security professionals quantifies the downstream effect. Fifty-three percent of organizations report that AI agents have exceeded their intended permissions, and only 8% say their agents never exceed scope. [2] Detection capability remains weak: just 16% expressed high confidence in identifying agent-specific threats. [2]

Shadow agents add another dimension. Fifty-four percent of organizations report 1-100 unsanctioned AI agents in their environment, with ownership frequently undefined. [2] When agents are invisible, governing their behavior is impossible.

Invisible Trust Chains Across SaaS

A separate CSA analysis by Dr. Tal Shapira, CTO of Reco, explains why traditional controls miss these interactions. When one AI agent calls another through tool-use patterns or MCP server connections, the trust relationship exists only at runtime. There is no consent screen, no persistent token, and no log entry capturing what was shared between agents - making these interactions invisible to CASBs, identity providers, and SIEMs. [3]

The security implication is severe: a compromised agent in a chain does not need to escalate privileges. It continues passing attacker-controlled context to the next agent, generating no detectable anomaly because agents accessing data at scale and interacting with multiple systems in rapid succession is normal behavior. [3]
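One countermeasure the analysis implies is making each hop in an agent chain explicit and attributable. A minimal sketch, assuming a hypothetical call wrapper - nothing here reflects Reco's product or the actual MCP interfaces:

```python
import time

audit_log = []

def call_agent(chain, target_agent, payload, handler):
    """Invoke another agent while recording the full trust chain.

    `chain` is the ordered list of agent IDs that led to this call,
    so every downstream action stays attributable to its origin.
    """
    new_chain = chain + [target_agent]
    audit_log.append({
        "ts": time.time(),
        "chain": new_chain,               # e.g. ["orchestrator", "summarizer"]
        "payload_keys": sorted(payload),  # log the shape, not the secrets
    })
    return handler(new_chain, payload)

# A two-hop chain: an orchestrator agent delegates to a summarizer.
def summarizer(chain, payload):
    return {"summary": payload["text"][:10], "chain": chain}

result = call_agent(["orchestrator"], "summarizer",
                    {"text": "quarterly revenue figures"}, summarizer)
print(result["chain"])  # ['orchestrator', 'summarizer']
```

The point is not the logging mechanics but the property they create: a compromised agent can no longer pass context downstream without leaving an attributable record of who invoked whom.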

The Converging Message

Read independently, each report highlights a different failure mode. Read together, they describe a single systemic weakness: enterprises have deployed autonomous agents into environments whose identity, authorization, and monitoring architectures were designed for human users and static integrations.

CSA's recommendations across all three publications align on core priorities: agents need their own identities, not borrowed ones; access should be per-task and short-lived; actions must be visible and attributable; and security teams need the ability to terminate agent chains in real time. [1][3]
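The "per-task and short-lived" recommendation can be illustrated with a minimal token model. The function names and the five-minute TTL are illustrative assumptions, not a CSA-prescribed implementation:

```python
import secrets
import time

def mint_task_token(agent_id: str, task: str, ttl_seconds: int = 300):
    """Mint a credential bound to one agent, one task, and a short lifetime."""
    return {
        "token": secrets.token_urlsafe(16),
        "agent_id": agent_id,
        "task": task,
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(tok, agent_id: str, task: str) -> bool:
    """Reject the token if the agent, the task, or the lifetime does not match."""
    return (
        tok["agent_id"] == agent_id
        and tok["task"] == task
        and time.time() < tok["expires_at"]
    )

tok = mint_task_token("agent-crm-01", "export-q3-report")
print(is_valid(tok, "agent-crm-01", "export-q3-report"))  # True: right agent, right task
print(is_valid(tok, "agent-crm-01", "delete-records"))    # False: token is task-bound
```

Because the credential expires on its own and is useless outside its named task, a leaked token buys an attacker far less than a standing service-account key.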

What Comes Next

Only 13% of organizations feel highly prepared for upcoming AI-related regulations, while 49% say they are slightly or not at all prepared. [2] With the EU AI Act's next enforcement phase arriving in August 2026, the gap between agent capability and governance maturity is narrowing from both directions - agents are getting more powerful while regulators are getting more specific. Organizations that treat agent identity as an afterthought are building technical debt that will compound with every new deployment.


Image: Jyotirmoy Gupta / Unsplash

Sources

  1. Who's Behind That Action? The AI Agent Identity Crisis — Cloud Security Alliance / Aembit
  2. More Than Half of Organizations Experience AI Agent Scope Violations, Cloud Security Alliance / Zenity Survey
  3. AI Agents Are Talking, Are You Listening? — Dr. Tal Shapira / Cloud Security Alliance
