Ping Identity has published new research warning that enterprises are deploying AI agents faster than their identity and access management systems can govern them, creating a class of authorisation risk that traditional security models were not designed to address.
The report, From AI Agents to Trusted Digital Workers, was conducted by independent analyst firm KuppingerCole Analysts and commissioned by Ping Identity. It identifies a structural failure mode in enterprise identity systems: AI agents now operate at runtime, making decisions and executing actions across systems without the human checkpoints that conventional access controls assume.
Why existing identity models fall short for AI agents
Established identity frameworks such as OAuth and OIDC were built around human decision-makers. When AI agents operate autonomously — spawning sub-agents, inheriting permissions, and acting across multiple enterprise systems — these frameworks provide access but do not enforce control at the moment an action occurs.
The research identifies several compounding risks: delegation chains that become untraceable, context leakage across systems without continuous re-evaluation, and ambiguity around permission inheritance and liability in agent-to-agent interactions.
“Enterprises are deploying autonomous AI faster than they can govern it. Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs.” — Andre Durand, CEO and Founder, Ping Identity
The scale of the gap
The report draws on IBM’s 2025 Cost of a Data Breach findings to quantify the exposure: 13% of organisations have already experienced breaches involving AI models or applications, and 97% of those breached lacked adequate AI access controls. KuppingerCole notes that AI agents are already interacting across enterprise identity systems, even as most IAM approaches remain centred on human users and static access decisions.
Recent enterprise incidents — including data leaks and prompt injection attacks — illustrate how gaps in AI governance are being exploited in production environments today.
A runtime governance framework
KuppingerCole’s independent blueprint for governing autonomous AI is built around four pillars: identity, policy-based authorisation, governance and oversight, and accountability. The framework extends zero trust principles to support continuous, runtime authorisation rather than one-time access grants.
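A minimal sketch of what a per-action gate touching all four pillars might look like, assuming a simple in-memory agent registry and policy table (every name here is hypothetical and not taken from the KuppingerCole framework itself):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-governance")

def verify_identity(agent_id: str, registry: set[str]) -> bool:
    """Pillar 1, identity: the agent must be a known, registered principal."""
    return agent_id in registry

def evaluate_policy(agent_id: str, resource: str, operation: str,
                    policy: dict[str, set[str]]) -> bool:
    """Pillar 2, policy-based authorisation: evaluated for each action,
    not cached from an earlier one-time grant."""
    return operation in policy.get(f"{agent_id}:{resource}", set())

def authorize_action(agent_id: str, resource: str, operation: str,
                     registry: set[str], policy: dict[str, set[str]]) -> bool:
    """Pillars 3 and 4, governance and accountability: every decision,
    allow or deny, is logged so oversight and audit remain possible."""
    allowed = (verify_identity(agent_id, registry)
               and evaluate_policy(agent_id, resource, operation, policy))
    log.info("agent=%s resource=%s op=%s allowed=%s",
             agent_id, resource, operation, allowed)
    return allowed
```

Because `authorize_action` runs on every action rather than once per session, a policy change or a revoked registration takes effect immediately, which is the practical difference between continuous runtime authorisation and a one-time access grant.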
“As autonomous agents become more prevalent, organisations will need to extend identity and authorisation models to maintain control, accountability, and trust across increasingly dynamic environments.” — Martin Kuppinger, Founder, KuppingerCole Analysts
Ping Identity’s Identity for AI capabilities — covering runtime identity verification, policy-based access, and governance controls — are designed to align with this framework. The company has been recognised as an Overall Leader in multiple KuppingerCole Leadership Compass reports, including Customer IAM and B2B identity.
The full report is available via the Ping Identity website.