Ege Erdil
A related and quite important insight here is that the tasks that humans seem to struggle with, and that AI systems seem to make much faster progress on, are things that emerged fairly recently in evolutionary time.
So advanced language use emerged in humans maybe 100,000 years ago, and certainly playing chess and Go and so on are very recent innovations.
And so evolution has had much less time to optimize for them, in part because they're very new, but also in part because when they emerged, there was a lot less pressure, since they conferred only small fitness gains to humans. So evolution didn't optimize for these things very strongly.
And so it's not surprising that AI systems are able to make fast progress on these specific tasks, the ones humans find very impressive when other humans are able to do them.
In humans, these things are often very strongly correlated with other kinds of competence, like being good at just achieving your goals: being a good coder is often very strongly correlated with solving coding problems, and being a good engineer is often correlated with solving competitive coding problems.
But in AI systems, the correlation isn't quite as strong.
And even within AI systems, the strongest systems on competitive programming are not the ones that are best at actually helping you code. So, for example, o3-mini-high seems to be maybe the best at solving competitive coding problems, but it isn't the best at actually helping you write code.
But an important insight here is that for the things we find very impressive when humans are able to do them, we should expect AI systems to make a lot more progress. We shouldn't update too strongly about their general competence, though, because this is a very narrow subset of the tasks humans do in order to be competent, economically valuable agents.
Yeah.
I remember talking to a very senior person, who's now at Anthropic, back in 2017. And he told various people that they shouldn't do a PhD because by the time they completed it, everything would be automated.
Maybe before we talk about that, I think a very important point to make here, which I think underlies some of the disagreement we have with others about this argument from the transition from non-human animals to humans, is this focus on intelligence and reasoning, and on the R&D enabled by that intelligence, as being just enormously important.
And so if you think that you get this very important difference from this transition from non-human primates to humans, then you think that in some sense you get this enormously important unlock