Dwarkesh Podcast
2027 Intelligence Explosion: Month-by-Month Model – Scott Alexander & Daniel Kokotajlo
And so he just said to take magnesium supplements, and that cured a lot of migraines.
So why aren't LLMs able to leverage this enormous asymmetric advantage they have to make a single new discovery like this?
I will actually disagree with this.
We know that humans can do this... we have examples of humans doing this.
I agree that we don't have logical omniscience because there is a combinatorial explosion.
But we are able to leverage our intelligence to... Actually, one of my favorite examples of this is David Anthony, the guy who wrote The Horse, The Wheel, and Language.
He made this...
It was a super impressive discovery before we had the genetic evidence for it, like a decade before, where he said, look, if I look at all these languages in India and Europe, they all share the same etymology.
I mean, literally what you're talking about, the same etymology for words like wheel and cart and horse.
And these are technologies that have only been around for the last 6,000 years, which must mean that there was some group that all these groups are at least linguistically descended from.
And now we have genetic evidence for the Yamnaya, which we believe is this group.
You have a blog where you do this.
This is your job, Scott.
So why shouldn't we hold the fact that language models can't do this more against them?
I mean, it seems like such an economically valuable... But how would you set up the training environment?
Well, maybe that's why you should have longer timelines.
It's a gnarly engineering problem.
And then let me just address that in a second, but just one final thought on this thread.
To the extent that there's like a modus ponens, modus tollens thing here, where one thing you could say is like, look, AIs, not just LLMs, but AIs will have this fundamental asymmetric advantage where they know all this shit.