Dwarkesh Patel
It's not net productive to build a custom training pipeline to identify what macrophages look like, given the specific way that this lab prepares slides, and then another training loop for the next lab-specific microtask, and so on.
What you actually need is an AI that can learn from semantic feedback or from self-directed experience, and then generalize the way a human does.
Every day, you have to do 100 things that require judgment, situational awareness, and skills and context that are learned on the job.
These tasks differ not just across different people, but even from one day to the next for the same person.
It is not possible to automate even a single job by just baking in a predefined set of skills, let alone all the jobs.
In fact, I think people are really underestimating how big a deal actual AGI will be because they are just imagining more of this current regime.
They're not thinking about billions of human-like intelligences on a server, which can copy and merge all the learnings.
And to be clear, I expect this, which is to say, I expect actual brain-like intelligences within the next decade or two, which is pretty fucking crazy.
Sometimes people will say that the reason AIs aren't more widely deployed across firms right now, and aren't already providing lots of value outside of coding, is that technology takes a long time to diffuse.
And I think this is cope.
I think people are using this cope to gloss over the fact that these models just lack the capabilities that are necessary for broad economic value.
If these models actually were like humans on a server, they'd diffuse incredibly quickly.
In fact, they'd be so much easier to integrate and onboard than a normal human employee is.
They could read your entire Slack and Drive within minutes, and they could immediately distill all the skills that your other AI employees have.
Plus, the hiring market for humans is very much a lemons market: it's hard to tell who the good people are beforehand, and hiring somebody who turns out to be bad is obviously very costly.
This is just not a dynamic that you would have to face or worry about if you're just spinning up another instance of a vetted AGI model.
So for these reasons, I expect it's going to be much easier to diffuse AI labor into firms than it is to hire a person.
And companies hire people all the time.
If the capabilities were actually at AGI level, people would be willing to spend trillions of dollars a year buying tokens that these models produce.
After all, knowledge workers across the world cumulatively earn tens of trillions of dollars a year in wages.