Guillaume Verdon
Podcast Appearances
And in general, I don't think that we can predict the future with that much granularity because of chaos, right? If you have a complex system, you have some uncertainty in a couple of variables. If you let time evolve, you have this concept of a Lyapunov exponent, right? A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so over time.
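As a side note on the Lyapunov-exponent remark above, here is a minimal sketch in Python, my own illustration rather than anything from the conversation, using the chaotic logistic map: two trajectories that start almost identically drift apart exponentially, and a crude fit to the log of their separation recovers the map's known Lyapunov exponent of roughly ln 2.

```python
import numpy as np

# A rough illustration (not from the transcript) of the Lyapunov-exponent idea:
# in a chaotic map, two trajectories that start almost identically separate
# exponentially fast, so a tiny initial uncertainty ("a bit of fuzz") becomes
# a large one after relatively few steps.

r = 4.0                    # logistic map x -> r*x*(1-x), chaotic at r = 4
x, y = 0.4, 0.4 + 1e-10    # two nearly identical initial conditions
separations = []

for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    separations.append(abs(x - y))

# While the separation is still small, it grows roughly like exp(lambda * t).
# For the logistic map at r = 4 the Lyapunov exponent is ln 2 ~ 0.69, so a
# straight-line fit to log(separation) over the early steps should land near that.
early = np.log(np.array(separations[:25]))
lam_estimate = np.polyfit(np.arange(25), early, 1)[0]
print(f"estimated Lyapunov exponent ~ {lam_estimate:.2f} (ln 2 ~ 0.69)")
```

The same exponential blow-up of a small initial uncertainty is what makes long-range prediction of chaotic systems so difficult.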
And I think we need to show some humility that we can't actually predict the future. All we know, the only prior we have is the laws of physics. And that's what we're arguing for. The laws of physics say the system will want to grow. And subsystems that are optimized for growth and replication are more likely in the future.
And so we should aim to maximize our current mutual information with the future. And the path towards that is for us to accelerate rather than decelerate. So I don't have a p(doom), because I think the situation is similar to the quantum supremacy experiment at Google. I was in the room when they were running the simulations for that.
That was an example of a quantum chaotic system where you cannot estimate the probabilities of certain outcomes even with the biggest supercomputer in the world. And so that's an example of chaos. And I think the system is far too chaotic for anybody to have an accurate estimate of the likelihood of certain futures.
If they were that good, I think they would be very rich trading on the stock market.
I think, to me, one of the biggest existential risks would be the concentration of the power of AI in the hands of the very few, especially if it's a mix between the companies that control the flow of information and the government. Because that could set things up for a sort of dystopian future where only a very few and an oligopoly in the government have AI, and they could even convince the public that AI never existed. And that opens up sort of these scenarios for authoritarian centralized control, which to me is the darkest timeline.

And the reality is that we have a prior, we have a data-driven prior of these things happening, right? When you give too much power, when you centralize power too much, humans do horrible things, right? And to me, that has a much higher likelihood in my Bayesian inference than sci-fi-based priors, right? Like, "my prior came from the Terminator movie," right?