Dwarkesh Patel
They're not thinking about billions of human-like intelligences on a server, which can copy and merge all the learnings.
And to be clear, I expect this, which is to say, I expect actual brain-like intelligences within the next decade or two, which is pretty fucking crazy.
Sometimes people will say that the reason that AIs aren't more widely deployed right now across firms and aren't already providing lots of value outside of coding is that technology takes a long time to diffuse.
And I think this is cope.
I think people are using this cope to gloss over the fact that these models just lack the capabilities that are necessary for broad economic value.
If these models actually were like humans on a server, they'd diffuse incredibly quickly.
In fact, they'd be so much easier to integrate and onboard than a normal human employee is.
They could read your entire Slack and Drive within minutes, and they could immediately distill all the skills that your other AI employees have.
Plus, the hiring market for humans is very much like a lemons market, where it's hard to tell who the good people are beforehand, and then obviously hiring somebody who turns out to be bad is very costly.
This is just not a dynamic that you would have to face or worry about if you're just spinning up another instance of a vetted AGI model.
So for these reasons, I expect it's going to be much easier to diffuse AI labor into firms than it is to hire a person.
And companies hire people all the time.
If the capabilities were actually at AGI level, people would be willing to spend trillions of dollars a year buying tokens that these models produce.
Knowledge workers across the world cumulatively earn tens of trillions of dollars a year in wages.
And the reason that labs are orders of magnitude off this figure right now is that the models are nowhere near as capable as human knowledge workers.
Now, you might be like, look, how can the standard suddenly have become that labs have to earn tens of trillions of dollars of revenue a year, right?
Like, until recently, people were saying, can these models reason?
Do these models have common sense?
Are they just doing pattern recognition?
And obviously, AI bulls are right to criticize AI bears for repeatedly moving these goalposts.