Rob Wiblin
And then that perspective says that if you can slot in AIs to replace not just the cognitive but the cognitive and the physical, the entire package, and close the full loop of AIs and robots doing everything needed to make more AIs and robots, then there's no reason to think that 2% is some sort of physical law of the universe.
They can grow as fast as their physical constraints allow them to grow, which are not necessarily the same as the constraints that keep human-driven growth at 2%.
Yeah, I'm honestly not sure.
I think maybe one part of it is that... So I guess I'm partial to the "things will be crazier" side, so I'm not sure I'll be able to give a perfectly balanced account.
But I feel like one thing I've noticed about people who think it'll be slower is that their worldview kind of has a built-in error theory of people who think things will go faster.
So the worldview is not just "things will keep ticking along" — it's "everyone thinks there will always be some big new revolution, everyone's always expecting a speedup, and they've always been wrong."
So there's that dynamic, which, from their point of view, I think is totally reasonable. It's kind of like:
even if there isn't some super knockdown argument in your interlocutor's own terms, where you can point to a mistake that they'll accept, or even if you look at the story and think it's kind of plausible, you still have this strong prior that:
Someone could have made the same argument about television.
Someone could have made the same argument about computers.
None of those played out.
So I think that's a big factor.
I also think there hasn't been that much dialogue — these are complicated ideas.
And I think there could be more.
And I think there could be more dialogue that tries to ground things in near-term observations as well.
But yeah, I think that's a big part of it.
I think they have an error theory built in that lets them discount the object-level conversation about, okay, here's how the AI could make the robots, and here's how the robots could bootstrap into more robots, and so on.