How to Know If Your AI Is Actually Working

Most businesses measure AI with the wrong metrics. A practical framework for real AI ROI measurement and spotting AI theater before it costs you more.

Sam McKay

Six months into an AI project, I asked a business owner how things were going.

“Great,” he said. “Our team is using it constantly.”

So I asked a follow-up: “What’s actually different in your business compared to before?”

Long pause.

The honest answer was: not much. People were using the tool. The vendor was showing dashboards with usage numbers that looked impressive. But the actual business outcomes — revenue, capacity, customer experience, error rates — had not moved.

This is AI theater. And it is more common than most business leaders want to admit.

I have spent years building Enterprise DNA and working directly with businesses through Omni on AI implementation. The pattern I keep seeing is that businesses invest in AI, measure the wrong things, declare success based on activity rather than outcomes, and then wonder why the ROI never shows up on the bottom line.

Here is how to actually know if your AI is working.

The problem with “adoption” as a metric

When AI vendors pitch you, they talk about adoption rates. Seat count. Queries per day. Active users. These numbers look great in quarterly reviews and are genuinely easy to track.

They do not tell you whether your business is better.

Adoption is an input metric, not an outcome metric. Measuring adoption to assess AI ROI is like measuring how often your team attends meetings to assess whether your strategy is working. People can attend every meeting and still produce nothing valuable.

The metrics that actually tell you whether AI is working are harder to measure, which is exactly why most businesses do not measure them.

Four questions that reveal the truth

1. What would have taken longer without it?

This is the most basic ROI question. Before implementing AI, pick three to five processes where you expect it to make a difference. Time those processes. Document the output quality. Set a baseline.

Six months later, time them again. Compare.

If you cannot answer this question with real numbers, you do not have an AI ROI story. You have an AI adoption story, which is not the same thing.

I am constantly surprised by how few businesses do this. They implement AI without measuring the before state, which means they can never prove the after state is better. Instinct takes over. People feel like things are faster. Maybe they are. But feeling is not measurement.

2. What are you doing now that you could not do before?

Sometimes AI ROI is not about efficiency. It is about new capability.

A business that deploys a voice AI agent can now answer customer calls at 2am without paying anyone overtime. That is not a time saving on an existing process. That is an entirely new capability that was economically impossible before.

A business that deploys a data AI agent can now run weekly reports across six different data systems automatically. Previously they either did not run those reports (because nobody had time) or they paid a contractor to spend four hours doing it manually.

Capability expansion is harder to put a dollar figure on but often represents the largest actual value from AI. If you are only measuring efficiency gains, you are undercounting.

3. Where is the error rate now compared to before?

This is the most underused measurement in AI implementation.

Humans make errors. Consistently, predictably, and often invisibly. Data entry errors. Missed follow-ups. Incorrect calculations. Misrouted customer requests. The costs from these errors are real but often buried in rework, customer churn, or just accepted as the cost of doing business.

AI systems, when properly implemented, reduce error rates substantially in well-defined tasks. If you are not measuring error rates before and after implementation, you are not capturing this value.

In one business we worked with through Omni Ops, we found the biggest measurable ROI from AI was not in time savings. It was in catching data errors before they reached customers. The financial impact of those prevented errors was significantly larger than the efficiency gain from automation.

4. What did your team do with the time they got back?

This question is where most AI ROI stories either get validated or fall apart.

If your AI system genuinely saves each of three people 20 hours a week, that is 60 hours per week of recovered capacity. The question is whether that capacity is being used for something valuable.

In too many businesses, recovered AI capacity just gets absorbed into existing work patterns. People work slightly less intensely. Deadlines stretch a little less. But the business is not actually doing anything different with the extra capacity.

In businesses where AI delivers real ROI, the freed capacity is deliberately redirected. New products get developed. Existing customers get more attention. New markets get explored. The AI effectively expanded what the business can do, not just how efficiently it does what it already does.

If you cannot point to a specific business initiative that your AI freed up capacity for, your ROI story is incomplete.

The signs you are running AI theater

Sign 1: Your measurement is vendor-supplied.

If the only numbers you have on AI performance came from your vendor’s dashboard, you have an information problem. Vendors measure what makes them look good. That is not deception, it is incentive design. Build your own tracking, independent of the vendor.

Sign 2: Everyone is enthusiastic, but outcomes are vague.

“The team loves it” and “we are definitely more efficient” are not ROI evidence. They are vibes. Vibes are fine, but you should be asking for numbers alongside them.

Sign 3: You cannot name a specific decision that changed because of AI.

If your AI system is not influencing actual decisions — pricing, inventory, staffing, customer outreach — it is probably a productivity accessory at best. The highest-value AI deployments change what decisions get made and how fast.

Sign 4: Your baseline data is a mess.

This is a structural problem, not an AI problem. If your data was poorly organized, incomplete, or inconsistent before AI, the AI outputs will reflect those problems. "Garbage in, garbage out" is a cliché because it is true. If you are measuring AI performance and the numbers look wrong, the issue may be data quality, not model quality.

This is one of the reasons we built EDNA Learn. Data literacy is not just about using tools. It is about being able to evaluate whether the outputs you are getting are trustworthy. Businesses whose teams can interrogate AI output critically are far better positioned to get real value from AI than those who accept every output at face value.

Building a proper measurement framework

Start simple. You do not need a complex analytics infrastructure to measure AI ROI properly.

Pick one process. Document the current state in three dimensions: time taken, error rate, and output volume. Implement AI for that process. Wait 60 days. Measure the same three dimensions.

If all three improved, dig into why and document it. That becomes your template for the next process.

If one of them got worse (often error rate does initially, until the AI is properly tuned), treat that as a data point and fix it before expanding. Do not hide it in the adoption dashboard.
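The 60-day loop above is simple enough to track in a spreadsheet, but here is a minimal sketch of the same comparison in Python. All numbers are hypothetical placeholders, not real client data, and the three dimensions match the ones described above: time taken, error rate, and output volume.

```python
# Minimal sketch of the one-process measurement loop: snapshot a process
# before AI, snapshot it again after 60 days, compare the three dimensions.
from dataclasses import dataclass


@dataclass
class ProcessSnapshot:
    hours_per_week: float   # time taken
    error_rate: float       # errors per 100 outputs
    outputs_per_week: int   # output volume


def compare(baseline: ProcessSnapshot, after: ProcessSnapshot) -> dict:
    """Percentage change per dimension. Negative is an improvement for
    time and errors; positive is an improvement for volume."""
    def pct(before: float, now: float) -> float:
        return round((now - before) / before * 100, 1)

    return {
        "time_change_pct": pct(baseline.hours_per_week, after.hours_per_week),
        "error_change_pct": pct(baseline.error_rate, after.error_rate),
        "volume_change_pct": pct(baseline.outputs_per_week, after.outputs_per_week),
    }


# Hypothetical example: a weekly reporting process, measured before AI
# and again 60 days after implementation.
baseline = ProcessSnapshot(hours_per_week=12.0, error_rate=4.0, outputs_per_week=6)
after_60_days = ProcessSnapshot(hours_per_week=3.0, error_rate=1.0, outputs_per_week=6)

print(compare(baseline, after_60_days))
# → {'time_change_pct': -75.0, 'error_change_pct': -75.0, 'volume_change_pct': 0.0}
```

In this hypothetical, time and errors both improved while volume held flat, which is the "dig into why and document it" case. If error_change_pct had come back positive, that is the signal to fix the process before expanding.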

Scale gradually. Most AI failures come from companies that tried to automate everything at once, got inconsistent results, and could not figure out which piece was broken.

The businesses that build consistent AI ROI are the ones that measure rigorously at small scale before expanding. They accumulate evidence rather than assumptions. The data on what separates those businesses from everyone else is stark — our analysis of why 80% of companies see no real ROI from AI details the three specific habits the top 20% share.

The data readiness question

Before any serious AI implementation, I always ask about data readiness. Not in a technical way. Just these questions:

  • Do you know where your important business data lives?
  • Is it in one place or scattered across ten different systems?
  • Do your team members agree on what the key numbers in your business actually mean?
  • When two people pull the same report, do they get the same numbers?

If the answer to any of these is “not really,” the AI project has a data problem underneath it. You can implement AI anyway, but the measurement challenge will be harder, and the ROI will be lower than it should be.

This is not a reason to delay AI indefinitely. It is a reason to invest in data foundations at the same time as AI tools. The businesses that do both are the ones that end up with measurable, defensible AI ROI stories twelve months later.

What to do if your AI is not working

If you have been through this framework and the honest answer is that your current AI investment is not producing measurable outcomes, that is useful information.

Do not extend the contract. Do not add more seats. Pause and diagnose.

Usually the problem falls into one of three categories. The wrong problem was automated (there was a more valuable use case that nobody prioritized). The data foundation was not ready (outputs are unreliable because inputs were unreliable). Or the change management failed (the AI was implemented without changing how the team actually works).

Each of these has a fix. None of them are fixed by spending more money on the same AI tool that is not delivering.

If you want a clear-eyed assessment of where your AI investment actually stands, a conversation with the Omni Advisory team is a good starting point. We have done this diagnosis enough times to spot the pattern quickly.

And if the issue is data skills — your team not being equipped to build baselines, evaluate outputs, or interrogate AI results — that is exactly what EDNA Learn is built for.

AI theater looks convincing from the outside. But the businesses that build real AI capability are the ones willing to measure honestly, even when the results are uncomfortable.

That discipline is what separates AI that changes a business from AI that just changes the talking points in board presentations.

Related reading: What we tell every business owner before investing in AI, how to stop buying the wrong AI tools, 3 AI investments that pay off in year one, and a real account of replacing three internal workflows with AI agents.