SOC 2 Type 2 certified, curated integrations, tighter security perimeter.
Enterprise grade from day one.
Model agnostic and works from Slack or Telegram.
Try it at zenflow.free.
Welcome back to the AI Daily Brief.
We have discussed enterprise AI, implicitly or by extension, quite a bit recently without necessarily going super deep on what recent numbers are telling us.
I shared the AI maturity maps framework last week, which is a way of looking at AI readiness and AI adoption across six different dimensions, including deployment depth, systems integration and governance, and shared a bit about what our research had told us about where organizations are right now and why we think in many cases it's behind where they need to be.
But of course, that's different than digging into the actual numbers themselves.
And recently, we've gotten a bunch of different studies, all with direct sourcing from inside companies, that are telling some similar and some different stories about enterprise AI.
So what we're going to do today is talk through what those studies are telling us, where they agree, where they disagree, and what I think the sum total is, and why even all of this still might be missing something.
First up, we have some research from A16Z.
Now, where this data comes from is the aggregation of private data from a number of leading enterprise AI startups who live and work inside many of these big corporations.
Here are a couple of the highlight numbers.
A16z found that about 19% of the Global 2000 are live, paying customers of a leading AI startup, with that number rising to 29% for the Fortune 500.
That means these enterprises have signed a top-down contract with an AI startup, successfully converted a pilot, and gone live with the product in their organization.
Now, 29% might seem low, but as you heard, that does not include pilot efforts, nor, my guess, is it comprehensive across every tool that companies might be using.
Their next exploration is what is actually working.
And here's their methodology.
A16z writes, we find that the most indicative way to assess the work the models are inherently better at doing is to overlay revenue momentum across use cases against the theoretical capabilities of models as defined by GDPval.