The Visibility Gap No One Designed For
Ask a security team who can access customer data, and the answer typically comes with receipts: OAuth scopes, API keys, audit logs. Ask which AI agents are exchanging that same data across Salesforce, Slack, and Google Drive, and the answer is far less clear.
A new Cloud Security Alliance analysis by Dr. Tal Shapira, CTO of Reco and a member of CSA's AI Controls Security Working Group, frames the problem in structural terms. Agent-to-agent trust relationships form at runtime - when one agent calls another through a tool-use pattern or an MCP (Model Context Protocol) server connection - and disappear when the chain completes. There is no consent screen, no persistent token, and no log entry that captures what was shared between agents ("AI Agents Are Talking, Are You Listening?", Cloud Security Alliance). Existing controls - CASBs, identity providers, SIEMs - were not built to observe these interactions.
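The missing audit record is easy to picture. Below is a minimal Python sketch - with hypothetical agent names, tool names, and log schema, none of which come from the CSA analysis or any real MCP implementation - of what persisting an agent-to-agent call as a log entry could look like:

```python
import json
import logging
import time
import uuid
from typing import Any, Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent_audit")

def audited_call(caller: str, callee: str, tool: Callable[..., Any],
                 **kwargs: Any) -> Any:
    """Wrap one agent-to-agent tool invocation and emit an audit record.

    The trust relationship otherwise lives only for the duration of the
    call, so the record is persisted here, at the call boundary.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "caller_agent": caller,
        "callee_agent": callee,
        "tool": tool.__name__,
        "arguments": list(kwargs),  # log argument names, not payloads
    }
    log.info(json.dumps(record))
    return tool(**kwargs)

# Hypothetical downstream tool exposed by a second agent.
def fetch_crm_record(account_id: str) -> dict:
    return {"account_id": account_id, "tier": "enterprise"}

result = audited_call("support-bot", "crm-agent", fetch_crm_record,
                      account_id="ACME-42")
```

Even a wrapper this thin captures the two facts the article says go unrecorded today: which agents formed the chain, and which tool carried the data between them.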
Gartner predicts that 40% of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5% in 2025 ("AI Agents Are Talking, Are You Listening?", Cloud Security Alliance). The implicit trust chains connecting those agents are growing faster than security teams can map them.
Model Safety vs. Deployment Governance
CrowdStrike's announcement this week as a founding member of Anthropic's Project Glasswing draws a sharp line that echoes the CSA analysis. CrowdStrike characterizes the division of labor as: "Model safety is the builder's responsibility. Deployment governance is ours." ("CrowdStrike Founding Member of Anthropic Mythos Frontier Model to Secure AI", CrowdStrike Blog)
Anthropic evaluates what a model can do - red-teaming, responsible scaling, capability gating. But when that model runs inside an enterprise with access to CRMs, databases, and thousands of unsanctioned deployments, the risk shifts to deployment governance: discovery, runtime visibility, data-flow enforcement, and access control. CrowdStrike reports discovering 1,800+ AI applications already running across customer environments, many of them deployed without security team approval ("CrowdStrike Founding Member of Anthropic Mythos Frontier Model to Secure AI", CrowdStrike Blog).
When AI Incidents Break the Playbook
Microsoft's security blog published a companion piece this week examining what changes when the incident you are responding to involves an AI system rather than a conventional application ("Incident response for AI: Same fire, different fuel", Microsoft Security Blog).
The core challenge: non-determinism. A traditional bug produces the same bad output from the same input. An AI model may produce harmful output today and benign output tomorrow from the identical prompt, because the root cause is a probability distribution shaped by training data and context, not a single line of code. Microsoft's IR team advocates a three-stage remediation model - "stop the bleed" within the first hour, "fan out and strengthen" over 24 hours, then "fix at the source" through classifier and model adjustments - because AI fixes cannot be verified the way a traditional patch can ("Incident response for AI: Same fire, different fuel", Microsoft Security Blog).
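The non-determinism point can be made concrete with a toy illustration: if the same prompt maps to a probability distribution over completions rather than a fixed string, replaying the triggering input does not reliably reproduce the incident. This sketch is purely illustrative - it is not how any production model or Microsoft's IR process works:

```python
import random

def toy_model(prompt: str, rng: random.Random) -> str:
    """Toy stand-in for sampled model output: one prompt, a distribution
    over completions, not a deterministic mapping."""
    completions = ["benign completion", "policy-violating completion"]
    return rng.choices(completions, weights=[0.95, 0.05])[0]

rng = random.Random()
outputs = {toy_model("the same prompt every time", rng) for _ in range(1000)}
# Across repeated identical calls, both completions will almost surely
# appear - which is why "run it again and check" fails as a verification
# strategy, and why Microsoft's staged remediation model exists.
```

A traditional patch is verified by re-running the failing input; here, a single benign replay proves nothing about whether the underlying distribution was actually fixed.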
Traditional severity frameworks also struggle. A model generating inaccurate medical guidance is fundamentally different in severity from the same model producing inaccurate trivia - context about who is affected and how carries more weight than conventional security metrics alone.
The Questions That Matter Now
The CSA analysis distills the gap into five operational questions for security teams: Do you know what agents you have? Can you see what they connect to at runtime? Are you evaluating chain-level permissions - where a read-only agent connected to a public-facing channel becomes a data exposure vector? Can you see what tools agents are calling? And critically: can you kill an agent chain in real time? ("AI Agents Are Talking, Are You Listening?", Cloud Security Alliance)
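The last question - killing an agent chain in real time - implies a control point that agent frameworks rarely expose. One possible shape, sketched in Python with entirely hypothetical names (the CSA analysis poses the question but does not prescribe an implementation):

```python
import threading

class ChainKillSwitch:
    """Minimal sketch: let an operator terminate an agent chain mid-flight.
    Every hop checks the switch before doing work."""

    def __init__(self) -> None:
        self._killed: set[str] = set()
        self._lock = threading.Lock()

    def kill(self, chain_id: str) -> None:
        with self._lock:
            self._killed.add(chain_id)

    def check(self, chain_id: str) -> None:
        with self._lock:
            if chain_id in self._killed:
                raise RuntimeError(f"chain {chain_id} terminated by policy")

switch = ChainKillSwitch()

def run_hop(chain_id: str, step: str) -> str:
    switch.check(chain_id)          # enforced before every agent hop
    return f"{step} completed"

print(run_hop("chain-7", "summarize"))   # first hop proceeds normally
switch.kill("chain-7")                   # operator revokes the chain
try:
    run_hop("chain-7", "post-to-slack")  # next hop is refused
except RuntimeError as exc:
    print(exc)
```

The design point is that the check happens at every hop, not once at chain start - otherwise a long-running chain outlives the decision to stop it, which is exactly the runtime-trust gap the article describes.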
For most organizations, the answer to most of these is no. Microsoft's open-source Agent Governance Toolkit, released April 2, 2026, is the first toolkit designed to address all 10 OWASP agentic AI risks with deterministic, sub-millisecond policy enforcement (Microsoft Agent Governance Toolkit, GitHub / Microsoft). The MAESTRO framework from CSA provides a structured threat-modeling approach for multi-agent environments. But enterprise adoption of both remains early.
Looking Ahead
The EU AI Act's next enforcement phase takes effect August 2, 2026, introducing mandatory automated audit trails, cybersecurity requirements for high-risk AI systems, and incident reporting obligations with penalties up to 3% of global revenue ("CrowdStrike Founding Member of Anthropic Mythos Frontier Model to Secure AI", CrowdStrike Blog). That deadline converts the authorization debate from a best-practice discussion into a compliance requirement. The organizations building agent interaction graphs and chain-level controls now will have a structural advantage. Those still relying on traditional IAM to govern autonomous agents will face both regulatory exposure and a threat model their tools were never designed to address.
Image: towel.studio / Unsplash
