Cal Newport
is what I call distributed AGI.
And I think that's just what the future is going to be, which is you have
specialized applications for different things where, oh, we want to do this thing over here.
We built something that has some AI in it — maybe it has an LLM, or it's a modular architecture with a billion-parameter model and a world model in there.
And it's really good at doing this one thing, and it's small, and it mainly runs on-chip.
And now this program can do this thing that I used to have to do.
And you multiply that across 10,000 different use cases and you're like, oh, we kind of have AGI, right?
There's all these different things that have AI tools that like do pretty well.
That's probably the most likely future.
It's a future I really like for a lot of reasons.
There'll be a lot of things that we can't make progress on, a lot of things we will, but it's a much more heterogeneous future.
There's no giant HAL 9000 brain, and it's economically more interesting and diverse.
It doesn't have all the sustainability issues.
That has to be the future.
But the problem with that future, if you're Sam Altman or Dario Amodei, is that their entire moat depends on you needing 10 trillion parameters. They want that to be the key to the AI future, because that moat is something no one can cross.
And if that's not the moat — if it's just, oh, if I want to build a poker-playing AI that's really good, I just need people who are good at poker, a couple of years, and a cool custom system — and that thing now does well.
If that's the future, you don't need OpenAI and you don't need Anthropic.
And I think that may well be the future.
And I think that's terrifying.