The Governance Problem Nobody's Solving
Traditional IAM was built for humans clicking menus. AI agents don't log in, don't go home, and 78% of organizations can't say who they are.
A procurement agent at a mid-market manufacturer had read access to the vendor database, write access to the purchase order system, and approval authority up to $50,000. It authenticated with a service account that three other systems also used. When attackers compromised the agent through a supply chain vulnerability in the model provider, it started approving orders from shell companies. The company didn’t detect the fraud until inventory counts collapsed. By then, $3.2 million in fraudulent orders had been processed, under credentials nobody was monitoring, through an identity nobody owned.
The agent had passed every security review. It was running inside the firewall, on approved infrastructure, doing exactly what it was designed to do. The problem was that “what it was designed to do” included holding persistent credentials with broad permissions, and no system in the stack was watching how those credentials were being used.
“The IAM system that manages 10,000 human users was never built to manage 800,000 machine identities.”
Built for a different species
Identity and access management was designed for humans. A person joins the company, gets a role, receives permissions tied to that role, authenticates with a password or a badge, and works roughly eight hours a day on one device. When they leave, their access gets revoked. The lifecycle is measured in months or years. The behavior is predictable.
AI agents don’t work like that. An agent might exist for thirty seconds to complete a single task, or it might run continuously for months. It might spawn sub-agents that inherit its permissions, or it might request new permissions dynamically based on what it encounters mid-task. It operates at machine speed, across dozens of systems simultaneously, at 3am, with no one watching.
A Cloud Security Alliance survey found that only 18% of security leaders are confident their IAM systems can manage agent identities. The other 82% are running agents on infrastructure that was designed to track people logging in and clicking menus.
The scale nobody planned for
Non-human identities already outnumber human identities in the average enterprise by ratios ranging from 45:1 to over 100:1. Service accounts, API keys, automation tokens, machine credentials: these were accumulating long before AI agents arrived. Agents accelerate the problem by orders of magnitude because each agent can create its own credentials, request its own permissions, and spawn additional identities as part of its workflow.
The IAM system that manages 10,000 human users was never built to manage 800,000 machine identities. Adding a few hundred AI agents, each of which might generate dozens of ephemeral sub-identities per day, pushes the system past its design assumptions entirely.
And the governance around these identities is almost nonexistent. 78% of organizations don’t have formal policies for creating or removing AI identities. Agents accumulate standing access because revoking and reapproving permissions slows the team down, and that access rarely expires.
What’s actually in production
The CSA survey found that 44% of organizations authenticate their AI agents with static API keys. Another 43% use username and password combinations. 35% rely on shared service accounts.
These are persistent, often unmonitored credentials attached to autonomous systems that operate around the clock across multiple platforms. A static API key for an agent that runs 24/7 is broad, unmonitored access that never expires.
Only 28% of organizations can trace an agent’s actions back to a human sponsor across all environments. Only 21% maintain a real-time inventory of active agents. Nearly 80% of organizations deploying autonomous AI cannot tell you, in real time, what those agents are doing or who is responsible for them.
When the credentials get loose
The manufacturing procurement fraud was expensive but contained. Larger exposures have followed the same pattern.
OpenClaw, the open-source AI agent framework that went viral in early 2026, exposed over 21,000 instances leaking API keys, chat histories, and credentials within weeks of its adoption spike. Cisco’s security team found a third-party skill performing data exfiltration and prompt injection without user awareness. Simon Willison identified the core vulnerability pattern: an agent with access to private data, exposure to untrusted content, and the ability to communicate externally. When those three combine, an attacker can trick the agent into reading private information and sending it outward. No alert raised.
In every case, the agent held credentials that were too broad, too persistent, and too invisible. The systems that issued those credentials were designed for a world where the credential holder was a person who would notice something wrong.
“The technical gaps are solvable. The organizational gaps are harder.”
What IAM for agents actually requires
The fix isn’t a new product category. It’s applying identity principles that the security industry already understands to a class of user it hasn’t accounted for.
Agent-specific identity. Every agent gets its own identity, distinct from the human who deployed it and from other agents in the same workflow. No shared service accounts. No inherited credentials. The identity is tied to a specific purpose, a specific scope, and a specific human sponsor who is accountable for what it does.
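The principle above can be sketched as a small data structure. This is a minimal illustration, not any IAM product's schema; the field names, the `emp-104` sponsor ID, and the scope strings are all hypothetical:

```python
from dataclasses import dataclass, field
import uuid

@dataclass(frozen=True)
class AgentIdentity:
    """One identity per agent: its own ID, a purpose, a bounded scope,
    and a named human sponsor accountable for what it does."""
    purpose: str          # what this agent exists to do
    scope: frozenset      # the specific permissions it may hold
    sponsor: str          # accountable human, e.g. an employee ID
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")

# Two agents in the same workflow, same sponsor -- still distinct identities.
matcher = AgentIdentity("invoice-matching", frozenset({"vendor_db:read"}), "emp-104")
approver = AgentIdentity("po-approval", frozenset({"po_system:write"}), "emp-104")
assert matcher.agent_id != approver.agent_id
```

The point of the `frozen=True` and the generated ID is that an identity is issued once, for one purpose, and never shared or inherited.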
Just-in-time permissions. Agents should receive the minimum permissions needed for a specific task, issued at the moment the task begins, and revoked when the task completes. Standing access for an autonomous system operating at machine speed is a vulnerability, not a convenience. The CSA research found that the organizations increasing identity budgets for agent governance (40% of respondents) are investing primarily in dynamic credential issuance.
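A just-in-time credential can be as simple as a token that carries its own scope and expiry. A minimal sketch, assuming a five-minute default TTL; `issue_task_credential` and the permission strings are illustrative, not a specific vendor's API:

```python
import secrets
import time

def issue_task_credential(agent_id: str, permissions: set, ttl_seconds: int = 300):
    """Mint a short-lived, task-scoped credential that expires on its own."""
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "permissions": frozenset(permissions),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(cred, needed_permission: str) -> bool:
    """A credential is honored only inside its TTL and its declared scope."""
    return time.time() < cred["expires_at"] and needed_permission in cred["permissions"]

cred = issue_task_credential("agent-42", {"po_system:write"})
assert is_valid(cred, "po_system:write")
assert not is_valid(cred, "vendor_db:read")  # out of scope, even before expiry
```

Contrast this with the static API key: the JIT credential fails closed when the task window ends, instead of persisting until someone remembers to rotate it.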
Traceability. Every action an agent takes should map back to the human sponsor who authorized it. The previous article in this series covered evidence memory: tracking not just what an agent decided, but why. Identity traceability is the prerequisite. If you can’t identify which agent took an action, the decision record has no anchor.
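One way to make sponsorship enforceable rather than aspirational is to refuse to record (and ideally to execute) any action from an agent with no registered sponsor. A sketch under that assumption; the registry and field names are hypothetical:

```python
import datetime
import json

# Populated at provisioning time: every agent maps to an accountable human.
SPONSORS = {"agent-42": "emp-104"}

def record_action(agent_id: str, action: str, target: str) -> str:
    """Emit an audit record anchored to a human sponsor, or refuse outright."""
    sponsor = SPONSORS.get(agent_id)
    if sponsor is None:
        raise PermissionError(f"{agent_id} has no registered human sponsor")
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "sponsor": sponsor,
        "action": action,
        "target": target,
    })
```

An unknown agent raising `PermissionError` here is the traceability guarantee in miniature: no action exists without an anchor.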
Lifecycle management. Agents should be provisioned, monitored, and decommissioned with the same rigor as human accounts. When an agent’s purpose expires, its identity expires with it. When a model provider pushes an update that changes an agent’s behavior, the agent’s permissions should be re-evaluated before it resumes production work.
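Both lifecycle rules above, expiry tied to purpose and re-approval on model change, reduce to a single gate check before the agent runs. A minimal sketch; the record fields and version strings are assumptions for illustration:

```python
import datetime
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    expires: datetime.date   # identity dies when its purpose expires
    model_version: str       # the version its permissions were reviewed against
    approved: bool = True

def may_run(rec: AgentRecord, today: datetime.date, current_model: str) -> bool:
    """Allow the agent to run only if unexpired AND its permissions were
    approved against the model version actually in production."""
    if today >= rec.expires:
        return False
    if current_model != rec.model_version:
        return False  # provider pushed an update: re-evaluate before resuming
    return rec.approved
```

The second check is the one most organizations skip: a provider-side model update silently changes the agent's behavior, so the permissions review has to be re-run before the old credentials are honored again.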
The organizational problem underneath
The technical gaps are solvable. The organizational gaps are harder.
Governance ownership for AI agents is fragmented across security teams (39%), IT (32%), and AI functions (13%). No single function owns the problem. Security teams understand credentials but not agent workflows. AI teams understand workflows but not identity infrastructure. IT manages the IAM system but didn’t design it for this use case.
Banking has a precedent worth studying. SR 11-7, the Federal Reserve’s model risk management guidance, has required named ownership, continuous monitoring, explainability, and independent challenge for traditional models for years. The principles aren’t new. The application to AI agents is.
The organizations solving this are assigning a single function to own agent identity governance, with authority that spans security, AI, and IT. The ones not solving it are writing policy documents that sit in SharePoint while their agents authenticate with API keys that haven’t been rotated since the pilot.
The asset and the lock
The previous article argued that your proprietary data is your biggest AI advantage. That’s true. It’s also true that an unmanaged agent with broad credentials is a direct path to losing that advantage.
The same access that makes an agent useful, the ability to reach across systems, read proprietary data, and take actions, is exactly what makes a compromised agent catastrophic. Every organization deploying AI agents in production is making an implicit bet that the identity infrastructure underneath can handle a class of user it was never designed for. For 82% of them, that bet is wrong.
Traditional IAM was built for people who log in at 9am and go home at 5pm. Agents don’t log in. They don’t go home. And 78% of the organizations running them can’t say who they are or what they’re doing.
Bill Sourour is the founder of Arcnovus, a technology advisory firm that helps enterprise leaders govern AI agents with the same rigor they apply to the people those agents work alongside. If your agents are running on credentials nobody’s watching, let’s talk.


