When to Hire an AI Consultant — And When You're Wasting Money

A mid-market logistics company called us last September. They’d spent $340,000 with an AI consultancy over six months. The deliverable: a 94-page “AI Readiness Assessment” recommending they “establish a data governance framework” and “identify high-impact use cases.” The company’s CEO dropped the report on our conference table and asked, genuinely, what they’d paid for.

We couldn’t give him a good answer.

The AI consulting market is flooded with generalists repackaging the same frameworks, the same maturity models, the same recommendations. They charge discovery fees to tell you things your operations team already knows, then propose implementation roadmaps that never get executed because nobody on staff can translate strategy-speak into working systems.

But here’s the thing: good AI consultants exist, and when you find one, the ROI is extraordinary. The difference between a good one and the rest isn’t subtle. You just need to know what to look for.

The Three Legitimate Reasons to Hire Outside Help

Not every AI initiative needs a consultant. Most don’t. But there are specific situations where bringing in outside expertise makes clear financial sense.

Reason one: you need someone who’s done the exact thing before. Your accounts payable team processes 4,000 invoices a month with a 6% error rate, and you want to automate the extraction and matching workflow. You don’t need a strategy consultant. You need someone who’s built invoice automation for three other companies your size and can tell you which OCR provider actually works for handwritten POs, which integration points will break, and what the realistic timeline looks like.

Specificity matters here. “AI implementation experience” means nothing. You want someone who can describe, in detail, the failure modes of the exact system you’re trying to build. If they can’t name the three things that went wrong on their last similar project and how they fixed them, they haven’t actually done it.

Reason two: your team has the skills but not the bandwidth. This is more common than people admit. Your engineering team could absolutely build the thing, but they’re already committed to product roadmap items that generate revenue. Hiring a contractor or small consultancy to handle a well-scoped AI project frees your team to keep shipping while the new capability gets built in parallel.

The key phrase is “well-scoped.” If you can’t write a clear project brief with specific inputs, outputs, success metrics, and a timeline, you’re not ready to hire anyone. You’re still in the figuring-it-out phase, and paying consultant rates for that phase is almost always a mistake.

Reason three: you need an honest broker for vendor selection. The AI vendor market is a mess. Every enterprise SaaS company now claims AI capabilities, most of which amount to a ChatGPT wrapper bolted onto their existing product. If you’re evaluating multiple platforms for a major deployment, an independent consultant who’s seen the internals of these tools across multiple client engagements can save you from a six-figure mistake.

But “independent” is the operative word. Many consultancies have referral agreements, implementation partnerships, or revenue-sharing arrangements with the vendors they recommend. If your consultant can’t produce a written conflict-of-interest disclosure, walk away.

The Red Flags That Scream “Waste of Money”

We’ve reviewed over a hundred AI consulting engagements across our portfolio companies and advisory clients. The patterns of wasted spend are remarkably consistent.

The discovery phase that never ends. A legitimate discovery engagement takes two to four weeks. The consultant interviews key stakeholders, maps existing workflows, identifies data sources, and produces a concrete plan with cost estimates and timelines. If a consultancy proposes a three-month discovery phase, they’re either padding the engagement or they don’t know what they’re looking for. Both are disqualifying.

The maturity model trap. If the first deliverable is a chart showing where you sit on a five-stage “AI Maturity Model” with recommendations to “progress from Stage 2 to Stage 3,” you’ve hired a firm that sells frameworks, not solutions. Maturity models exist so consultants can always identify a gap. There’s always a next stage. The assessment becomes the product, and the actual implementation keeps getting deferred.

Vendor-agnostic as a selling point. This sounds counterintuitive, but consultants who market themselves as “vendor-agnostic” are often the least useful. The best consultants have strong opinions about specific tools for specific problems because they’ve used them in production. A consultant who won’t recommend a specific OCR provider, a specific vector database, or a specific orchestration framework is either inexperienced or afraid to be held accountable for a recommendation.

Hourly billing on open-ended scopes. This is the structure that generates the worst outcomes. The consultant has zero incentive to finish quickly and every incentive to expand the scope. Fixed-fee engagements with clearly defined deliverables and success criteria aren’t just better financially. They force the consultant to think hard about what’s actually needed before they start the meter.

The “Center of Excellence” recommendation. If a consultant’s primary recommendation is that you create an internal “AI Center of Excellence” (staffed, conveniently, by their own people on a retainer basis), you’re looking at an annuity play, not a solution. Real consultants work themselves out of a job. They build the thing, transfer knowledge to your team, and leave.

How to Structure an Engagement That Actually Delivers

The companies we’ve seen get real value from AI consultants share a common approach to structuring the relationship.

Start with a paid pilot, not a strategy engagement. Instead of paying someone $150,000 to tell you what to build, pay them $40,000 to build one thing. Pick the smallest, most contained AI use case in your operation and have the consultant deliver a working system in four to six weeks. You’ll learn more about their competence from a single working prototype than from any number of strategy presentations.

We worked with a consumer products company that used this approach. They asked three consultancies to each build a working product categorization model using a sample dataset of 5,000 SKUs. Two of the three couldn’t deliver a functioning model in the allotted time. The third delivered one that performed at 91% accuracy with clear documentation on how to retrain it. That’s the consultant you hire for the full engagement.
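A bake-off like this only works if every consultancy is scored the same way, on SKUs their models never saw. As a minimal sketch (hypothetical data and a toy baseline, not the firm’s actual harness), evaluation can be as simple as:

```python
# Hypothetical evaluation harness for a product categorization pilot.
# Each candidate model is a callable: SKU description -> category.
# Scoring on a held-out labeled sample keeps the comparison honest.

def evaluate(model, holdout):
    """Return accuracy of `model` on (description, true_category) pairs."""
    correct = sum(1 for desc, truth in holdout if model(desc) == truth)
    return correct / len(holdout)

# Toy stand-ins for the real 5,000-SKU sample.
holdout = [
    ("stainless steel water bottle 750ml", "drinkware"),
    ("cotton crew neck t-shirt", "apparel"),
    ("usb-c charging cable 2m", "electronics"),
    ("ceramic coffee mug 12oz", "drinkware"),
    ("bamboo cutting board", "kitchen"),
]

def keyword_model(desc):
    # Trivial keyword baseline: any serious pilot should clear this bar.
    if "shirt" in desc:
        return "apparel"
    if "cable" in desc or "usb" in desc:
        return "electronics"
    return "drinkware"

print(f"accuracy: {evaluate(keyword_model, holdout):.0%}")  # 4 of 5 correct
```

The point isn’t the model; it’s that every vendor is measured against the same frozen sample, so “91% accuracy” means the same thing for all three.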

Define success metrics before the engagement starts. Not vague metrics like “improved efficiency” or “enhanced decision-making.” Actual numbers. “Reduce invoice processing time from 12 minutes to under 3 minutes per invoice.” “Achieve 85% accuracy on first-pass document classification.” “Cut customer response time from 4 hours to under 30 minutes for Tier 1 tickets.”

If the consultant pushes back on specific metrics, that tells you something. Either they don’t believe they can deliver measurable results, or they’ve never measured results in a previous engagement. Neither is acceptable for the fees they’re charging.

Require working code as a deliverable. Strategy documents, architecture diagrams, and roadmap slides are supplementary materials. The primary deliverable should be a functioning system that your team can operate, modify, and extend after the engagement ends. If the consultant’s deliverable is a PDF, you’ve bought advice, not a solution.

This also protects you from the dependency trap. Consultants who deliver working, well-documented systems can leave. Consultants who deliver proprietary frameworks and black-box solutions need to stay. Guess which model generates more recurring revenue for the consultancy.

Build knowledge transfer into the contract. Every week of the engagement should include paired work sessions where your internal team works alongside the consultant. By the end of the project, your team should understand every component of the system well enough to maintain it independently. If the consultant objects to this structure, they’re protecting their recurring revenue, not your interests.

The Internal Alternative Most Companies Ignore

Before you hire anyone, consider whether you actually need outside help at all.

The barrier to building useful AI systems has dropped faster than most executives realize. Tools like retrieval-augmented generation frameworks, no-code automation platforms, and pre-trained models accessible through APIs have made it possible for technically competent (not expert) internal teams to build systems that would have required specialized consultants two years ago.

We’ve seen operations managers with no formal engineering background build document processing workflows that handle 80% of their team’s manual classification work. They used off-the-shelf tools, spent a few weekends learning the basics, and iterated until the system worked. Total cost: about $2,000 in API credits and their own time.

The 80/20 rule applies aggressively here. If you need a system that works correctly 80% of the time and flags the remaining 20% for human review, an internal team can probably build that. If you need 98% accuracy on a mission-critical process, you probably need outside expertise. Know which problem you’re solving before you start writing checks.

Upskilling is cheaper than outsourcing. Sending two members of your operations team to a focused, hands-on AI implementation course (not a theoretical overview, but a build-something course) costs $5,000 to $10,000. Those people come back with enough knowledge to evaluate vendors, scope projects, and build simple systems. More importantly, they understand your business context in ways no outside consultant ever will.

The Consultant Interview That Separates Real From Fake

When you’ve decided you genuinely need outside help, here’s how we recommend evaluating candidates.

Ask them to describe their last three failures. Not the projects that went well. The ones that went badly. A consultant who can’t articulate specific failures either hasn’t done enough work to fail meaningfully or isn’t honest enough to be useful. The best consultants will tell you exactly what went wrong, what they learned, and what they’d do differently.

Request references from projects that ended more than 12 months ago. Anyone can get a reference from a current client who’s still in the honeymoon phase. You want to talk to someone who’s been living with the system the consultant built for a year or more. Ask the reference: “Is the system still running? Has the team modified it? Did the consultant’s architecture hold up?”

Give them a real problem and watch them think. Don’t accept a capabilities presentation. Describe an actual operational problem you’re facing and ask them, on the spot, how they’d approach it. You’re not looking for a complete solution. You’re looking for the right questions. A good consultant’s first response to a problem should be a series of clarifying questions about data volume, error tolerance, integration requirements, and existing workflows. If their first response is a solution, they’re selling, not thinking.

Ask about their team’s actual composition. Many consultancies staff senior partners for the sales process and junior analysts for the actual work. Ask who, specifically, will be working on your project, what their background is, and how many hours per week they’ll dedicate to your engagement. Get it in writing.

What Good Looks Like

We’ll end with what we’ve seen when it works.

A financial services firm we advise hired a two-person AI consultancy to automate their compliance document review process. The engagement was eight weeks, fixed fee, with a clear deliverable: a system that could flag potential compliance issues in client onboarding documents with at least 90% recall.

The consultants spent the first week sitting with the compliance team, watching them work, and cataloging every type of issue they flagged manually. Week two, they built a prototype using the actual document corpus. Weeks three through six were iterative improvement, with the compliance team testing each version and providing feedback. Weeks seven and eight were documentation, knowledge transfer, and handoff.

Total cost: $65,000. The system now handles first-pass review on 70% of incoming documents, cutting the team’s manual review time by roughly half. The compliance team maintains and improves the system themselves. They haven’t called the consultant in eight months.

That’s the engagement you’re looking for. Specific, time-bounded, measurable, and designed from day one to end.

Stop paying for advice. Pay for outcomes. And if you can build it yourself, do that instead.

Considering AI for your business?

We help companies cut through vendor noise and build AI capabilities that actually work. No pilots that go nowhere, no slides that promise everything.

Talk to us