A new study from identity security company Akeyless has found that two-thirds of organisations using AI agents suspect those agents have already accessed data beyond their intended scope. The research, based on a survey of 400 IT and security leaders across the US and UK, surfaces a problem that is growing quietly inside many businesses as they accelerate AI deployment.
The numbers behind this finding are not abstract. The average time to detect a compromised AI agent is 14 hours. Once detected, remediation takes nearly a week. Only 7% of organisations believe their current controls would actually prevent a compromised agent from operating. And the average annual cost of responding to AI agent identity and security incidents has already crossed $1 million.
Why AI Agents Create a New Kind of Security Problem
Traditional security thinking is built around human users. You verify who someone is, grant them appropriate access, and monitor what they do. AI agents disrupt this model in a specific way: they are granted access to systems and data, often with broad permissions, and then they operate autonomously across multiple contexts in ways that are difficult to monitor in real time.
As Akeyless CEO Oded Hareven put it: “AI agents are not breaking in. They are being invited in with real credentials and broad access. What this research shows is that most organisations do not yet have a clear picture of how those agents behave once deployed. The risk is not unauthorised access. It is authorised access that is not controlled in real time.”
This distinction matters. When a human employee accesses data they should not, there is usually a deliberate decision involved. When an AI agent does it, it is often the result of an overly permissive credential, an edge case in the agent’s instructions, or a task that required access the system was technically capable of providing but should not have been. The agent is not malicious. It is just doing what it was set up to do.
The Credential Problem
The Akeyless research highlights a specific technical pattern at the root of this issue: widespread reliance on persistent credentials, such as API keys and static secrets, often embedded directly in code or workflows. These credentials frequently carry broader permissions than any individual task actually requires. And more than four in five organisations in the study say that a single compromised credential could affect multiple major systems.
This is not a new problem. Developers and security teams have been wrestling with secrets management for years. But AI agents make it significantly worse, because an agent might access dozens of systems in the course of executing a single workflow, using credentials that were provisioned once and never reviewed again.
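To make the “single credential, multiple systems” risk concrete, here is a minimal sketch of a blast-radius check over a credential inventory. The credential names and system names are entirely hypothetical, not drawn from the study; real inventories would come from a secrets manager or IAM audit.

```python
# Hypothetical inventory mapping each standing credential to the
# systems it can reach. All names are illustrative.
CREDENTIAL_SYSTEMS = {
    "ci-deploy-key": {"source-control", "artifact-registry", "prod-cluster"},
    "agent-api-key": {"crm", "billing", "data-warehouse", "email"},
    "report-bot-token": {"data-warehouse"},
}

def blast_radius(credential: str) -> set[str]:
    """Return the set of systems exposed if this one credential leaks."""
    return CREDENTIAL_SYSTEMS.get(credential, set())

# One over-provisioned agent credential touches four major systems.
print(sorted(blast_radius("agent-api-key")))
```

Even this toy inventory shows why a credential provisioned once and never reviewed is dangerous: the agent’s workflow may only need one of those four systems, but a compromise exposes all of them.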
Investment Is Running Ahead of Controls
The study found that 75% of organisations believe AI adoption would accelerate if these risks were better managed. That means the security deficit is not just a liability; it is an active brake on how quickly businesses can scale their AI operations.
This dynamic is familiar from earlier waves of enterprise technology. Cloud adoption ran well ahead of cloud security maturity. Mobile rollouts preceded endpoint management frameworks. AI agents appear to be following the same arc: deployment velocity is outpacing the operational and governance infrastructure needed to support it.
The businesses that get ahead of this problem will have a structural advantage. They will be able to move faster with less risk, and they will avoid the expensive remediation cycles that are already consuming seven-figure budgets at organisations that did not plan for this phase.
What This Means for Business
For businesses currently deploying or planning to deploy AI agents, the Akeyless findings point to a specific set of questions worth answering before you scale:
Do you know what data your agents can access? Not in theory, but in practice, based on what credentials they hold and what permissions those credentials carry? Many organisations discover the answer to this question for the first time during an incident.
How quickly would you detect a problem? If your average detection time is measured in hours, your remediation window is narrow. If your answer is “we rely on endpoint monitoring and quarterly reviews,” that is not a detection strategy for a system that operates continuously and autonomously.
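Continuous detection for an autonomous system can start as simply as checking every access event against a declared scope as it arrives, rather than in a quarterly review. The sketch below assumes a stream of audit events from an agent runtime; the agent names, resource paths, and event shape are all hypothetical.

```python
# Hypothetical allow-list: which resources each agent is meant to touch.
ALLOWED = {
    "invoice-bot": {"billing/invoices", "billing/customers"},
}

def out_of_scope(events: list[dict], allowed: dict[str, set]) -> list[dict]:
    """Flag any event where an agent touched a resource outside its scope."""
    return [
        e for e in events
        if e["resource"] not in allowed.get(e["agent"], set())
    ]

events = [
    {"agent": "invoice-bot", "resource": "billing/invoices"},
    {"agent": "invoice-bot", "resource": "hr/salaries"},  # out of scope
]

print(out_of_scope(events, ALLOWED))  # surfaces the hr/salaries access
```

A real deployment would feed this check from live audit logs and alert on the first match, shrinking detection from hours to the moment of the event.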
Are your credentials scoped to the task? Agent security works best when agents receive only the access they need for a specific operation, not standing access to everything they might conceivably need. This requires an architecture decision, not just a policy decision.
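Task-scoped access can be sketched as a credential that carries one named scope and a short expiry, minted at the moment of use. This is an illustrative stand-in, not any particular vendor’s API; the scope strings and TTL are assumptions.

```python
import secrets
import time

class TaskCredential:
    """A credential minted for one task: narrow scope, short time-to-live."""

    def __init__(self, scope: str, ttl_seconds: int = 300):
        self.scope = scope
        self.token = secrets.token_urlsafe(16)
        self.expires_at = time.time() + ttl_seconds

    def authorises(self, resource: str) -> bool:
        # Valid only for the named scope and only until expiry; there is
        # no standing access left over to revoke later.
        return resource == self.scope and time.time() < self.expires_at

cred = TaskCredential("billing:read", ttl_seconds=300)
print(cred.authorises("billing:read"))  # valid while the task runs
print(cred.authorises("hr:read"))       # refused: outside the granted scope
```

The architectural point is that expiry and scope are properties of the credential itself, so an agent that wanders outside its task simply has nothing that works, regardless of what policy documents say.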
Who owns agent security in your organisation? This is not clearly answered in many businesses. It sits in a gap between IT, security, and the teams deploying agents. That ambiguity is where incidents develop.
None of these questions require you to slow down your AI deployment. They require you to run it with the same operational rigour you would apply to any system that touches critical data and customer information.
For businesses working through these questions, Omni Advisory offers a practical entry point: an outside perspective on your current AI architecture, governance gaps, and the sequencing decisions that determine whether your agent deployments scale safely or spend next year’s budget on incident response.
Source
PRNewswire / Akeyless