Azeem Azhar
We don't really know exactly what this is going to mean to companies and their willingness to continue to invest in these types of AI projects.
But the evidence is slowly but surely building up.
I think there's another point with OpenAI, or with any of these AI companies, and that is the disruption premium.
When you're a venture investor and you're investing in an early stage company, you're not really thinking about whether it will beat the NASDAQ.
You're thinking about convexity.
You're thinking about, does this give me a chance at a really, really exceptional outcome: a Google-style outcome, a Figma-style outcome, a Spotify-style outcome, or indeed an OpenAI-style outcome if you're one of the earlier investors.
That bet on convexity is, in a way, the one you're making when you invest in technology companies at this early stage.
It's really interesting historically when you look at stock market returns, and I recommend people read a paper by Hendrik Bessembinder, a finance professor at Arizona State University, where he looks at stock market returns over 100 years and identifies that they're very much clustered around a few dozen companies.
And when I looked a bit deeper, many of those few dozen companies were connected to the infrastructure technologies of the time, like the car or electricity or computing.
So there is this sense that when you are in a technology transition, value will accrue disproportionately to certain sectors, and we're starting to see that in the fact that the Magnificent Seven is driving the bulk of America's stock market returns.
So perhaps there is this disruption premium, this sort of wager that you're not really after 17% or 20%.
You're after the chance of something that's much, much higher.
I'm not even sure that any of this depends on AGI, artificial general intelligence.
Long-term listeners and readers will know that I have lots of problems with the way that term is conceptualized and thrown about.
I don't think it's a particularly useful or helpful definition, but let's play with it for the time being, whatever AGI means to you.
I don't think that any of this is dependent on super-capable AI systems that can do thousands of hours of work without anyone supervising them, even though that's what the evaluations seem to suggest they'll be able to do within 5, 10, 15, or 20 years.