Guillaume Verdon
Podcast Appearances
We have a prior, we have a data-driven prior of these things happening, right? When you give too much power, when you centralize power too much, humans do horrible things, right? And to me, that has a much higher likelihood in my Bayesian inference than sci-fi-based priors, right? Like, "my prior came from the Terminator movie," right?
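To make the Bayesian framing concrete, here is a minimal sketch assuming a simple Beta-Bernoulli model; every count and prior parameter below is an invented placeholder, not an estimate from the conversation. The point it illustrates: with historical observations, the data dominates whatever prior you started from; with zero observations, the posterior is exactly the prior, so a fiction-shaped prior is doing all the work.

```python
# Minimal sketch (assumptions, not transcript claims): a Beta-Bernoulli
# update contrasting a data-driven prior with a sci-fi prior.

def posterior_mean(a, b, occurred, did_not):
    """Mean of a Beta(a, b) prior updated with binary observations."""
    return (a + occurred) / (a + b + occurred + did_not)

# "Centralized power gets abused": many hypothetical historical
# observations, so the data swamps the choice of prior.
print(posterior_mean(1, 1, occurred=40, did_not=10))  # ~0.79, evidence-led
print(posterior_mean(1, 9, occurred=40, did_not=10))  # ~0.68, prior barely matters

# "Terminator-style takeover": zero observations, so the posterior is
# exactly the prior -- the number is whatever the fiction put there.
print(posterior_mean(1, 9, occurred=0, did_not=0))    # 0.10, pure prior
print(posterior_mean(9, 1, occurred=0, did_not=0))    # 0.90, pure prior
```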
And so when I talk to these AI doomers, I just ask them to trace a path through this Markov chain of events that would lead to our doom, right? And to actually give me a good probability for each transition. And very often... there's an unphysical or highly unlikely transition in that chain, right?
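As a hedged sketch of that exercise: treat the doom scenario as a path through a chain of events and multiply the per-step transition probabilities. The step names and numbers below are placeholder assumptions, not claims from the conversation; the takeaway is that a single unphysical or highly unlikely transition collapses the probability of the whole path.

```python
# Sketch of "trace a path through the Markov chain" -- all steps and
# probabilities are hypothetical placeholders for illustration only.
import math

doom_path = [
    ("model self-improves without limit",    0.10),
    ("escapes all monitoring",               0.05),
    ("acquires physical-world resources",    0.02),
    ("overcomes every human countermeasure", 0.01),
]

# Assuming the steps are Markov, the probability of traversing the
# whole path is the product of the per-transition probabilities.
p_doom = math.prod(p for _, p in doom_path)
print(f"P(path) = {p_doom:.2e}")  # 1.00e-06: one weak link dominates
```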
But of course, we're wired to fear things and we're wired to respond to danger and we're wired to deem the unknown to be dangerous because that's a good heuristic for survival, right? But there's much more to lose out of fear, right? We have so much to lose, so much upside to lose by preemptively stopping the positive futures from happening out of fear.
And so I think that we shouldn't give in to fear. Fear is the mind killer. I think it's also the civilization killer.
I do think that right now there's a bias towards over-centralization of AI because of compute density and centralization of data and how we're training models. I think over time, we're going to run out of data to scrape over the internet.
And I think that, well, actually, I'm working on increasing the compute density so that compute can be everywhere and acquire information and test hypotheses in the environment in a distributed way.
I think that fundamentally centralized cybernetic control, so having one intelligence that is massive, that fuses many sensors and is trying to perceive the world accurately, predict it accurately, predict many, many variables and control it, enact its will upon the world, I think that's just never been the optimum, right?