Stephen Wolfram
They have all kinds of computational irreducibility.
We don't understand what the natural world is doing.
Occasionally people ask: are the AIs going to wipe us out, for example?
Well, it's kind of like, is the machination of the AIs going to lead to this thing that eventually comes and destroys the species?
Well, we can also ask the same thing about the natural world.
Is the machination of the natural world going to eventually lead to this thing that's going to make the Earth explode or something like this?
Those are questions.
And insofar as we think we understand what's happening in the natural world, that's a result of natural science and so on.
One of the things we can expect, when there's this giant infrastructure of AIs, is that we'll have to invent a new kind of natural science: the natural science that explains to us how the AIs work.
It's kind of like we have a horse or something, and we're trying to ride the horse and go from here to there.
We don't really understand how the horse works inside, but we can get certain rules and certain approaches that we take to persuade the horse to go from here to there and take us there.
And that's the same type of thing we're dealing with in the case of these incomprehensible, computationally irreducible AIs: we can find pockets of reducibility, so to speak, grabbing onto the mane of the horse to be able to ride it, or figuring out that if we do this or that to ride the horse, that's a successful way to get it to do what we're interested in doing.
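The distinction between computational irreducibility and pockets of reducibility can be sketched concretely. The example below uses elementary cellular automata, the systems Wolfram himself uses to illustrate the idea; the specific rule numbers chosen and the helper-function names are just illustrative, not anything from the conversation.

```python
def step(cells, rule):
    """Apply an elementary cellular-automaton rule to one row of cells.

    Each cell's new value is the bit of `rule` indexed by its
    (left, center, right) neighborhood, read as a 3-bit number.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def center_column(rule, steps, width=201):
    """Simulate `steps` generations from a single black cell and
    record the value of the center cell at each step."""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = step(cells, rule)
    return column

# Rule 30 is the canonical irreducible case: no shortcut formula for
# its center column is known, so the only way to find out what it
# does is to run the computation step by step.
print(center_column(30, 20))

# Rule 254 is a pocket of reducibility: the pattern is a solid growing
# triangle, so the center column is predictably all 1s without simulating.
print(center_column(254, 20))
```

The contrast is the point: both systems follow equally simple rules, but only one admits a compressed description of its behavior. "Riding the horse" amounts to finding and exploiting cases like Rule 254 inside a system that, in general, behaves like Rule 30.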
I think some of these arguments, that there'll always be a smarter AI, that eventually the AIs will get smarter than us and then all sorts of terrible things will happen.
To me, some of those arguments remind me of kind of the ontological arguments for the existence of God and things like this.
They're kind of arguments that are based on some particular model, often a fairly simple one, of there always being a greater this, that, and the other.
What tends to happen, in the reality of how these things develop, is that it's more complicated than you expect.
The kind of simple, logical argument that says, oh, eventually there'll be a superintelligence and then it will do this and that, turns out not to really be the story.
It turns out to be a more complicated story.