
Dissecting the MIT NANDA Report with the Metrics Brothers

The claim that “95% of AI projects fail” has become one of the most repeated talking points in enterprise AI. But where did it come from, and does it actually hold up?


In this episode, Dave "CAC" Kellogg and Ray "Growth" Rike take a detailed, data-driven look at the MIT NANDA report, titled "The GenAI Divide: State of AI in Business 2025." They break down how the "95% fail rate" statistic went viral, why it stuck, and why the underlying evidence does not support such a sweeping conclusion.


What Ray and Dave cover:

  • Why the NANDA report is often mistaken for a peer-reviewed academic study when it is not

  • How ambiguous definitions of “failure” turn partial adoption into sensational headlines

  • Data inconsistencies and methodological gaps that undermine the 95% claim

  • The difference between failed AI initiatives and early-stage pilots or experiments

  • Why measuring AI success by the percentage of projects that succeed is misleading compared to measuring the business value created

  • The rise of Shadow AI and employee-driven adoption, and why that may be a feature, not a flaw

  • How the report’s conclusions conveniently align with the authors’ proposed NANDA architecture

  • The real issues enterprises face with AI: workflow integration, governance, and change management


The episode also covers why personal productivity gains still matter to the P&L, even when they do not show up as a clear line item, and why fear-driven AI narratives can do real damage inside organizations.
