Dwarkesh Patel
People have proposed different ways of charting how much progress you've made towards full AGI.
Because if you can come up with some trend line, then you can see where that line intersects with AGI and when that would happen on the x-axis.
And so people have proposed, oh, it's like the education level.
Like we had a high schooler and then they went to college with RL and they're going to get a PhD.
I don't like that one.
Or they'll propose horizon length: maybe they can autonomously do tasks that take a human a minute, then tasks that take a human an hour, a human a week, and so on.
How do you think about what is the relevant y-axis here?
How should we think about how AI is making progress?
I wonder about radiologists here. I'm totally speculating; I have no idea what the actual workflow of a radiologist involves.
But one analogy that might be applicable: when Waymos were first being rolled out, there'd be a person sitting in the front seat, and you had to have them there to monitor and make sure that if something went really wrong, they could step in.
And I think even today, people are still watching to make sure things are going well.
Robotaxi, which was just deployed, actually still has a person inside it.
And we could be in a similar situation where, if you automate 99% of a job, the last 1% the human has to do is incredibly valuable because it's bottlenecking everything else.
And if radiologists are in the situation where the person sitting in the front of the Uber or the Waymo has to be specially trained for years in order to provide that last 1%, their wages should go up tremendously, because they're the one thing bottlenecking wide deployment.
So with radiologists, I think their wages have gone up for similar reasons.
If you're the last bottleneck, you capture that value.
And you're not fungible, whereas a Waymo driver might be fungible with other things.