
When AI Agents Go Wrong, Nobody Is Liable

The Register investigates the legal vacuum around enterprise AI agents. Vendors won't comment on liability. Insurers are carving out AI from policies.

Enterprise DNA | via The Register

There is a question nobody in enterprise software wants to answer right now. When an AI agent makes a bad decision that costs your business real money, who is responsible?

The Register asked that question directly to the major enterprise software vendors deploying AI agents at scale. Microsoft declined to comment. SAP declined to comment. Workday, Salesforce, ServiceNow, and Oracle did not respond.

That silence tells you something important about where enterprise AI agent liability actually stands in April 2026.

The Problem Is Built Into How Agents Work

Traditional software is deterministic. You can test it, certify it, guarantee its behaviour under defined conditions. Enterprise vendors have been doing this for decades. Software warranties and liability clauses exist because the software does the same thing every time.

AI agents are fundamentally different. They are non-deterministic by design. Give the same prompt to the same agent twice and you may get different outputs. That unpredictability is actually what makes them useful — they can reason, adapt, and handle situations that weren’t anticipated at build time. But it also makes the traditional contractual model unworkable.
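The contrast can be illustrated with a toy sketch. Nothing here is a real vendor API; the function names and scenario are invented purely to show why a deterministic function can be tested and certified while a sampling-based agent cannot be guaranteed in the same way.

```python
import random

def deterministic_tax_rate(region: str) -> float:
    """Traditional software: the same input produces the same
    output, every single time, so its behaviour can be warranted."""
    rates = {"UK": 0.20, "DE": 0.19}
    return rates[region]

def agent_reorder_decision(stock_level: int) -> str:
    """Toy stand-in for an AI agent: it samples from a set of
    plausible actions, so two identical calls can disagree."""
    options = ["reorder now", "wait a week"] if stock_level < 100 else ["hold"]
    return random.choice(options)

# The deterministic function is testable and certifiable.
assert all(deterministic_tax_rate("UK") == 0.20 for _ in range(1000))

# The agent-style function is not: repeated identical calls diverge.
decisions = {agent_reorder_decision(50) for _ in range(1000)}
print(decisions)  # almost certainly contains both options
```

The `random.choice` call is doing the work a sampling-based language model does in a real agent: the output distribution, not a fixed mapping, is the contract surface, and that is precisely what vendors are declining to warrant.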

As The Register’s investigation makes clear, guaranteeing the behaviour of something inherently unpredictable is a contractual promise vendors find deeply uncomfortable. So they are simply not making it.

The Stakes Are Getting Higher

This would be a manageable problem if AI agents were being deployed in low-stakes roles. But the enterprise AI market is not heading in that direction.

The largest enterprise application providers — including several of the vendors who declined to comment on liability — are actively marketing AI agents for decisions in HR, finance, and supply chain. We are talking about agents that evaluate employee performance, file regulatory documents, and manage inventory decisions.

The failure modes in those contexts are not minor. Hallucinated figures in a performance summary could expose a company to wrongful termination litigation. An incorrect regulatory filing could trigger penalties. A supply chain decision based on faulty agent reasoning could mean critical shortages.

Gartner has put a number on the broader risk. By mid-2026, new categories of unlawful AI-informed decision-making are projected to generate more than $10 billion in remediation costs across global AI vendors and enterprises. That figure, cited in The Register’s reporting, is a reasonable estimate given the scale and pace of deployment.

Insurance Is Not Filling the Gap

You might assume that business insurance would cover the downside. That is becoming a less safe assumption.

Insurance underwriters have been watching AI agent deployments accelerate and have concluded that the risk profile is not yet well enough understood to price. The response from much of the industry has been to carve AI-related workflows out of standard business liability policies and to lobby state-level regulators for exclusions.

The result is a gap. Vendors will not accept liability. Insurance providers are pulling back from AI coverage. Businesses are deploying agents into high-stakes workflows and often do not realise that neither their software vendor nor their insurer is standing behind the decisions those agents make.

What This Means for Business

If you are deploying AI agents in your business, or evaluating a vendor who wants to sell them to you, these are the questions to ask before you sign anything.

What does the contract actually say about agent errors? Not in marketing language — in the legal language of the agreement. Look for indemnification clauses, liability caps, and what specifically is excluded.

What is your insurance position? Talk to your underwriter and ask directly whether AI agent outputs are covered under your current policies. Do not assume they are.

What decisions are agents actually authorised to make? There is a meaningful difference between an agent that recommends an action for a human to approve and one that executes autonomously. The liability question is easier to manage when humans remain in the loop on consequential decisions.
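That recommend-versus-execute distinction can be made concrete with an approval gate between what an agent proposes and what actually runs. This is a minimal sketch of the pattern, not any framework's API; the names and the monetary threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    estimated_impact: float  # e.g. monetary exposure in GBP (illustrative)

def execute_with_gate(action: ProposedAction,
                      approve: Callable[[ProposedAction], bool],
                      threshold: float = 1000.0) -> str:
    """Low-stakes actions run autonomously; anything above the
    threshold needs an explicit human sign-off before execution."""
    if action.estimated_impact <= threshold:
        return f"executed: {action.description}"
    if approve(action):
        return f"executed after approval: {action.description}"
    return f"blocked pending review: {action.description}"

# A consequential decision is routed to a human reviewer, who declines.
big = ProposedAction("file corrected VAT return", estimated_impact=50_000)
print(execute_with_gate(big, approve=lambda a: False))
# → blocked pending review: file corrected VAT return
```

The design choice that matters for liability is that the gate sits outside the agent: the agent can only ever return a `ProposedAction`, and a human decision, not model output, triggers consequential execution.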

What audit trail do you have? If an agent makes a bad decision, you will need to reconstruct exactly what happened, what inputs the agent received, and what it did. Without that audit trail, defending a liability claim becomes very difficult.
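A minimal audit-record sketch, assuming nothing beyond the Python standard library: every agent call appends one JSON line capturing its inputs, output, and timestamp, with a content hash so later tampering is detectable. Real deployments would add identity, model version, and tamper-proof storage; this only shows the shape of the record.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_agent_decision(log_path: str, agent_id: str,
                          inputs: dict, output: str) -> dict:
    """Append one audit record per agent decision to a JSON-lines
    file. The hash is computed over the record's content, so any
    later edit to the stored entry can be detected by recomputing it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "inputs": inputs,
        "output": output,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

rec = record_agent_decision(
    "agent_audit.jsonl", "inventory-agent-01",
    inputs={"sku": "X42", "stock": 7}, output="reorder 500 units",
)
print(rec["hash"][:12])
```

Because the file is append-only JSON lines, reconstructing what an agent saw and did on a given day is a matter of filtering records, which is exactly what a liability defence will ask for.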

The vendors’ silence on The Register’s question is telling. It means the liability question has not been resolved at the industry level. Until it is, the responsibility falls on the businesses doing the deploying.

That is not a reason to stop using AI agents. The productivity and operational leverage are real. But it is a reason to go into deployment with your eyes open about who is actually carrying the risk.


If you are building a case for AI agents inside your business and want to make sure you are deploying them sensibly, Omni Advisory offers strategic AI guidance for business leaders navigating exactly these questions.