How to Stop Buying the Wrong AI Tools
A practical 5-question framework for evaluating AI vendors before you sign. What demos hide, the red flags to watch, and when to build instead of buy.
There’s a new AI tool announced every single day. I’m not exaggerating. By the time you finish reading this post, at least a few more will have launched with promises of transformation, automation, and productivity gains you won’t believe.
Most of them will not help your business. Some of them will actively set you back. And the challenge is that from the outside, from the demo and the sales deck and the slick landing page, they all look more or less the same.
I’ve watched good businesses waste serious money on AI tools that never delivered. Not because the technology didn’t work, but because nobody asked the right questions before signing the contract.
This is the framework I use. Five questions. Ask them before you buy anything.
The demo problem
Before we get to the questions, let’s talk about why demos are so misleading.
AI demos are built to show you the best possible scenario. The data is clean and pre-loaded. The integrations all work perfectly. The edge cases that will consume 40% of your support tickets have been quietly omitted. The demo environment is nothing like your actual production environment.
This is not necessarily deceptive. It’s just the nature of demos. Nobody shows you their product failing gracefully under real-world conditions. You only find out about that after you’re three months in, already past the refund window.
The questions below are designed to get past the demo and into reality.
Question 1: Does it solve a problem you actually have today?
This sounds obvious. You’d think nobody buys software for a problem they don’t have. But I see it happen constantly.
An AI tool comes along that does something genuinely impressive. The team gets excited. It ends up on a company card. And then six months later, nobody can quite articulate what business problem it was supposed to solve.
Before you look at a single demo, write down the specific problem in one sentence. Not “we want to be more productive with AI.” Something like: “Our team spends 6 hours a week manually compiling data from three different systems into a report that our ops lead then re-formats in Excel.”
If the tool doesn’t solve that specific problem, it doesn’t matter how impressive it is. Move on.
And be honest with yourself about timing. Some AI tools solve real problems that your business will have in 18 months, but not today. Those are fine to watch. They’re not fine to pay for now.
Question 2: What does it need from you to actually work?
This is the question vendors would prefer you never ask in precise terms. And it’s the one that catches the most businesses out.
Every AI tool has inputs. Data it needs to access, systems it needs to integrate with, workflows it needs to be trained on. And in many cases, the setup work to get those inputs in place is substantial.
“Setup takes about a week” usually means setup takes about a month, assuming your data is reasonably clean. If your data isn’t clean, which is true of most businesses, setup is longer, and you may need to fix underlying data problems before the tool works at all.
Ask the vendor directly: what do I need to have in place before this tool does what you showed me in the demo? What are the integration requirements? What data quality assumptions does the tool make? Do those assumptions match my actual data?
If they struggle to answer these questions precisely, that’s a red flag. It means they’ve been selling the dream and haven’t spent enough time in the reality of customer onboarding.
Also ask: what does my team need to do differently for this to work? AI tools that require significant behavior change from your team have a much lower adoption rate than vendors suggest. Factor that in.
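One way to walk into that conversation prepared is to profile your own data before the vendor ever sees it. Here’s a minimal sketch in Python using pandas, assuming your records export to CSV; the file name and the key fields are placeholders I’ve invented for illustration, not anything a particular tool requires.

```python
import pandas as pd

# Hypothetical export of the records the tool would ingest.
# "customer_records.csv" and the key fields below are placeholders.
df = pd.read_csv("customer_records.csv")

# Share of missing values per column. Demo environments assume ~0%.
null_rates = df.isna().mean().sort_values(ascending=False)
print("Null rate per column:")
print(null_rates.to_string())

# Exact duplicate rows inflate counts and confuse matching logic.
dupes = df.duplicated().sum()
print(f"\nDuplicate rows: {dupes} of {len(df)} ({dupes / len(df):.1%})")

# Fields the tool would key on. Swap in whatever the vendor names.
for key in ["email", "account_id"]:
    if key in df.columns:
        print(f"Key field '{key}': {df[key].isna().mean():.1%} missing")
```

If the null rates or the duplicate count surprise you, that’s the cleanup work hiding behind “setup takes about a week.”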
Question 3: What happens when it fails?
AI tools fail. The question is whether they fail gracefully or catastrophically.
What does the fallback look like when the AI gets it wrong? Is there a human review step, or does the wrong output go straight to your customer? Is there a way to catch errors before they cause damage?
What are the SLAs? If the tool goes down at 2pm on a Wednesday, what’s the recovery time? Who do you call? What’s the escalation path?
What does support look like? For many AI SaaS products, especially newer ones, support is a community forum and a chatbot. If you’re running a business-critical process through this tool, you need to know what real support looks like.
Ask them for two or three examples of how customers have handled failures or edge cases. If they can’t give you concrete examples, it’s a signal that they haven’t thought carefully about this, or that they don’t have many customers who’ve been through it yet.
The vendors who have genuinely mature products can tell you exactly what happens when things go wrong, because they’ve been through it with real customers.
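For what it’s worth, when a vendor does have a good answer to the fallback question, it often amounts to some form of confidence gate: outputs the AI is unsure about get routed to a person instead of straight to the customer. Here’s a rough sketch of that shape. The threshold, the Draft structure, and the delivery steps are all invented for illustration; no specific product works exactly this way.

```python
from dataclasses import dataclass

# Illustrative threshold; in practice you'd tune it against the real
# cost of a wrong answer reaching a customer.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Draft:
    text: str
    confidence: float  # assumed to come back from the AI tool

def send_to_customer(text: str) -> str:
    # Placeholder for the real delivery step (email, ticket reply, etc.).
    return f"SENT: {text}"

def queue_for_review(text: str) -> str:
    # Placeholder for a human review queue; errors stop here,
    # not at the customer.
    return f"HELD FOR REVIEW: {text}"

def handle(draft: Draft) -> str:
    """Route low-confidence output to a person instead of the customer."""
    if draft.confidence >= CONFIDENCE_THRESHOLD:
        return send_to_customer(draft.text)
    return queue_for_review(draft.text)

print(handle(Draft("Your refund has been processed.", confidence=0.92)))
print(handle(Draft("Your account will be deleted.", confidence=0.41)))
```

The point isn’t the code, it’s the mental model: you want to be able to recognize whether a vendor has a step like `queue_for_review` at all.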
Question 4: Can you measure ROI within 90 days?
If the vendor can’t help you define what success looks like and how you’d measure it within 90 days, be very cautious.
This isn’t about setting unrealistic expectations. Some AI implementations do take time to show results. But any serious vendor should be able to work with you to define leading indicators: metrics that tell you the tool is working before you see the full business impact.
If the pitch is “it’s hard to measure but you’ll definitely feel the difference,” walk away. That’s not a measurement problem. That’s a confidence problem. The vendor doesn’t know that the tool will work for your use case.
You should be able to answer: at 30 days, I’ll know it’s working if X. At 60 days, I’ll see Y. At 90 days, the business outcome I expect is Z.
If you can’t define those things before you start, you have no way to evaluate whether you should keep paying after the trial period.
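One way to make that enforceable is to write the checkpoints down as data before you sign, so nobody can quietly redefine success later. A minimal sketch; the metrics and targets are invented, loosely tied to the example problem statement from question one:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Checkpoint:
    day: int
    metric: str              # the leading indicator agreed with the vendor
    target: float            # what "working" means at this milestone
    higher_is_better: bool   # direction of improvement
    actual: Optional[float] = None  # filled in as the trial runs

# Invented numbers, tied to the example problem statement from question 1
# (the 6 hours a week of manual report compilation).
plan = [
    Checkpoint(30, "reports needing manual rework (%)", 50.0, higher_is_better=False),
    Checkpoint(60, "hours/week of manual compilation", 3.0, higher_is_better=False),
    Checkpoint(90, "hours/week of manual compilation", 1.0, higher_is_better=False),
]

def on_track(cp: Checkpoint) -> Optional[bool]:
    """None until measured; otherwise whether the milestone was hit."""
    if cp.actual is None:
        return None
    return cp.actual >= cp.target if cp.higher_is_better else cp.actual <= cp.target

plan[0].actual = 62.0  # example day-30 measurement: target missed
for cp in plan:
    print(f"Day {cp.day}: {cp.metric} -> {on_track(cp)}")
```

The format matters far less than the discipline: a number, a date, and an agreed direction of improvement, written down before the first invoice.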
Question 5: Does the vendor’s roadmap align with where your business is going?
You’re not just buying what the tool does today. You’re implicitly betting on where the vendor is going.
Ask them: what’s coming in the next 6 to 12 months? Where are they investing? What capabilities are customers most requesting?
Compare that against your own business trajectory. If you’re planning to scale internationally in 18 months, does the vendor have plans to support that? If you’re moving to a new CRM next year, is that on their integration roadmap?
This question also tells you something about the vendor’s self-awareness. A vendor who can speak clearly about their roadmap and the trade-offs they’re making has thought carefully about their product. A vendor who gives you a list of 40 upcoming features without any sense of priority or timeline is probably telling you what you want to hear.
Red flags to watch for
Beyond the five questions, a few patterns should make you slow down.
Case studies that don’t look like your business. If every case study they show you is a Fortune 500 company and you’re running a $10 million professional services firm, that mismatch matters. The implementation complexity, the data environment, and the support requirements are fundamentally different at those two scales.
Reluctance to do a real proof of concept. A confident vendor will let you run a real pilot on your actual data, with your actual workflows. A vendor who insists on demoing only in their environment has something to hide.
The “AI-powered” modifier on everything. When every feature in the product has “AI-powered” in the description, it usually means nothing has genuine AI capability. Real AI applications are specific about what the AI does and doesn’t do.
Pricing that only makes sense at scale. Some AI tools price in a way that works out well if you’re processing millions of items per month but makes no sense for your actual volume. Do the math at your real scale before you commit; there’s a worked example after this list.
No clear answer on data security. Any business handling customer data should ask exactly where that data goes, who can access it, and what the vendor’s security certifications are. Vague answers here are not acceptable.
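On the pricing red flag, here’s the kind of back-of-envelope math that’s worth five minutes of your time. Every number below is invented for illustration; substitute the vendor’s actual price sheet and your real monthly volume:

```python
# Every number here is invented for illustration. Use the vendor's
# real price sheet and your real monthly volume.
BASE_FEE = 2_000          # $/month platform fee
PER_ITEM = 0.04           # $ per item beyond the included allowance
INCLUDED_ITEMS = 100_000  # items covered by the base fee

def monthly_cost(items: int) -> float:
    overage = max(0, items - INCLUDED_ITEMS)
    return BASE_FEE + overage * PER_ITEM

for volume in (10_000, 100_000, 1_000_000, 5_000_000):
    cost = monthly_cost(volume)
    print(f"{volume:>9,} items/mo: ${cost:>10,.2f} total, ${cost / volume:.4f} per item")
```

At these made-up numbers, the 10,000-item business pays $0.20 per item while everyone at or above the included allowance pays $0.02 to $0.04. That’s the mismatch to look for.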
The sunk cost trap
Here’s the pattern I see play out most often. A business buys an AI tool and spends three months trying to get it to work. The results don’t come, but by then they’ve sunk so much time and internal political capital into the rollout that they can’t face walking away.
So they keep paying. They keep trying. And all of that time and energy could have been spent on something that actually works.
The three-month check-in is your safety valve. Go back to the metrics you defined in question four. If you’re not seeing the leading indicators you expected at 90 days, have a hard conversation. Either the implementation went wrong and there’s a specific, fixable reason, or the tool isn’t right for your use case and you need to move on.
Walking away from a sunk cost is hard. But it’s always cheaper than continuing to invest in the wrong thing.
When to build instead of buy
For most businesses, buying is the right starting point. Off-the-shelf AI tools are faster to deploy, cheaper to maintain, and you get the benefit of all the development work the vendor has already done.
But there are situations where custom build makes more sense.
If your use case is genuinely unique and no existing tool addresses it well, building gives you a competitive advantage that you can’t get from a product everyone else can also buy.
If you’re dealing with sensitive data that you can’t send to a third-party vendor’s infrastructure, a self-hosted or custom solution is the right answer.
If you’ve outgrown several off-the-shelf tools and they all break in the same place, that’s a signal that the category doesn’t have a good solution for your specific problem yet.
But be honest with yourself about whether the use case is truly unique, or whether you just haven’t found the right existing tool yet. Custom builds are expensive, take longer than expected, and require ongoing maintenance. They’re the right answer in specific situations, not a default.
Related reading: Why most businesses aren’t ready for AI agents yet and why you need a fractional AI advisor instead of guessing your way through vendor decisions alone.
If you want help evaluating what you’re looking at, or if you’re sitting on a shortlist of tools and want a second opinion before you commit, that’s exactly what Omni Advisory is for.
We do this across dozens of businesses and categories. We know what questions to ask and where vendors tend to obscure the truth. An hour of unbiased evaluation can save you months of the wrong investment.