Eric Pharr, Founder · April 15, 2026 · 6 min read

What the Research Actually Shows About Broken SMB AI Strategies

The noise around AI adoption makes it easy to believe everyone is winning. The research says otherwise. Here is what three major studies published in the last year actually found, and what the pattern looks like when you stack them side by side.

The Three Numbers That Should Stop You

Number one: 95% zero impact. MIT's Project NANDA report, published in 2025, found that 95% of organizations that deployed generative AI saw zero measurable P&L impact. The study was grounded in 150 leader interviews, a survey of 350 employees, and analysis of 300 public AI deployments. Not anecdote. Not opinion. Three hundred deployments, ninety-five percent nothing.

Number two: only 28% succeed. Gartner's survey of 782 infrastructure and operations leaders in November and December 2025 found only 28% of AI use cases fully succeed and meet ROI expectations, while 20% fail outright. That leaves more than half sitting in a zombie middle, consuming budget and producing no clear outcome.

Number three: $547 billion wasted. Pertama Partners' 2026 analysis found that enterprises invested $684 billion in AI initiatives in 2025, and over $547 billion of it, roughly 80%, failed to deliver intended business value. The capital wasted in a single year is comparable to the annual GDP of Sweden.

Stack those three numbers. The story is consistent. Most AI deployments do not work.

What "Broken" Actually Means

When a study calls a project a failure, it usually means one of four things. The research identifies these patterns consistently.

Pattern one: the pilot that never leaves the pilot. Gartner found that only 48% of AI projects make it past pilot, and at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025. The team runs a demo, the demo impresses, budget gets spent, then the project disappears because nobody scoped how it would actually live in the operation.

Pattern two: adoption without impact. McKinsey data shows 88% of organizations now use AI in at least one function, but only 39% see any EBIT impact, and over 80% report no meaningful impact on enterprise-wide EBIT despite adoption. Employees are using the tools. The company is paying for the tools. The P&L is unchanged.

Pattern three: missing data foundation. Almost every failure analysis lands on the same culprit. The organization launched AI on top of data that was not ready. The models produced output that could not be trusted because the inputs were inconsistent, incomplete, or wrong. Gartner and MIT both flag data readiness as the single most common failure root cause.

Pattern four: no defined outcome. The project was approved because leadership wanted "to be doing AI," not because a specific problem had been scoped. Without a defined success metric, there is no way to evaluate ROI, no way to iterate, and no way to justify the next round of investment.

Why SMBs Amplify Every One of These

Most of the research above focuses on enterprises. SMBs face the same failure patterns, usually amplified, for three structural reasons.

Smaller margin for error. An enterprise can burn $10M on a failed AI initiative and absorb it. A $5M SMB cannot burn $200K on a failed deployment without consequences. The downside of picking wrong is proportionally much larger.

Less in-house expertise. SBA Office of Advocacy research from September 2025 found SMBs are closing the adoption gap but still typically lack dedicated AI talent. That means the person picking the tools, integrating them, and measuring their impact is usually someone whose primary job is something else.

Barriers compound. A 2025 NSBA poll found 38% of small firms worry about data privacy and security risk, 37% lack time or resources, and 34% are not convinced of clear ROI. Any one of those barriers is surmountable. All three together block most deployments before they start.

What the Research Says Actually Works

The patterns that separate successful AI deployments from the 95% are not mysterious. The studies agree on what distinguishes winners.

Data foundation first. 2025 benchmarking research shows 74% of growing SMBs are increasing data management investments, versus 47% of declining SMBs. The companies that get AI ROI are the companies that built the data foundation before the model layer.

Narrow scope, measured outcome. The deployments that work start with one function, one metric, one timebox. A lead-scoring model that either improves conversion by a measurable amount in 90 days or gets killed. A collections agent that either reduces days sales outstanding (DSO) by a measurable amount in one quarter or gets killed. No vague "productivity gains."

Integration into an existing workflow. The tools that deliver value are the ones embedded in a workflow someone already runs. The tools that fail are the ones that require a new workflow nobody was asking for.

Someone accountable for the outcome. Not just the technology. The business outcome. When a named person owns the ROI of an AI deployment, it is much more likely to produce one.

The Implication for SMB Leaders

If you are an SMB leader considering AI, the research tells you something specific. The default outcome is failure. The path away from the default requires discipline the enterprise playbooks do not teach and the vendor marketing actively obscures.

That is why strategy work has to come before tool selection, not after. A ninety-day pilot with no defined outcome is not a strategy. A stack of subscriptions chosen because competitors use them is not a strategy. A shiny demo from a vendor who will not own the P&L outcome is not a strategy.

A real AI strategy names the problem, defines the measurable outcome, specifies the data requirements, identifies the accountable owner, and commits to kill the project if it does not deliver. That is what separates the 5% that produce impact from the 95% that produce nothing.
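For the operationally minded, that checklist can be sketched as a simple go/kill charter. This is an illustrative sketch only; the pilot, metric names, and thresholds below are hypothetical examples, not figures from the research.

```python
from dataclasses import dataclass

@dataclass
class AIPilot:
    """Illustrative pilot charter: one problem, one metric, one owner, one deadline."""
    problem: str        # the specific problem being solved
    metric: str         # the single success metric
    baseline: float     # metric value before the pilot
    target: float       # value the pilot must hit to survive
    owner: str          # named person accountable for the ROI
    timebox_days: int   # hard deadline for the go/kill decision

    def decide(self, measured: float) -> str:
        # Committed in advance: hit the target by the deadline or kill the project.
        return "continue" if measured >= self.target else "kill"

# Hypothetical example pilot
pilot = AIPilot(
    problem="lead qualification takes too long",
    metric="lead-to-demo conversion rate",
    baseline=0.08,
    target=0.10,        # must lift conversion from 8% to 10% within the timebox
    owner="VP Sales",
    timebox_days=90,
)

print(pilot.decide(0.09))  # below target: kill
```

The point of writing it down this way is that every field is filled in before the pilot starts, so there is nothing to argue about on day 90.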

The research is clear. The path is narrow. The default is failure. Do the strategy first.

Want an AI strategy that avoids the 95%?

An AI Readiness Assessment identifies where AI will actually move your P&L, and where it will burn your budget. Book a discovery call.


Sources

  1. Fortune: MIT report, 95% of generative AI pilots at companies are failing (covering MIT Project NANDA, August 2025)
  2. Gartner: AI Projects in I&O Stall Ahead of Meaningful ROI Returns (April 2026 press release, surveying 782 I&O leaders)
  3. Pertama Partners: AI Project Failure Rate 2026
  4. WorkOS: Why most enterprise AI projects fail (citing McKinsey data on 88% adoption, 39% EBIT impact)
  5. SBA Office of Advocacy: Research Spotlight, AI In Business, Small Firms Closing In (September 2025)
  6. NSBA: New Data on AI Adoption, Trends in Small Businesses (2025 poll)
  7. Big Sur AI: AI Adoption in SMBs vs Enterprises, Rates, ROI, and Barriers 2025