Guillaume Verdon
Podcast Appearances
Out of responsibility to the future humans we could carry with higher carrying capacity by scaling up civilization. Out of responsibility to those humans, I think we have to make the greater, grander future happen.
I think, like I said, the market will exhibit caution. Every organism, company, consumer is acting out of self-interest and they won't assign capital to things that have negative utility to them.
Well, that's why we need freedom of information, freedom of speech, and freedom of thought in order to converge, be able to converge on the subspace of technologies that have positive utility for us all.
I'm not a fan of that calculation. I think people just throw numbers out there. It's a very sloppy calculation, right? To calculate a probability, let's say you model the world as some sort of Markov process, if you have enough variables, or a hidden Markov process. You need to do a stochastic path integral through the space of all possible futures, not just...
the futures that your brain naturally steers towards, right? I think that the estimators of p(doom) are biased because of our biology, right? We've evolved to... have biased sampling towards negative futures that are scary because that was an evolutionary optimum, right?
And so, people that are of, let's say, higher neuroticism will just think of negative futures where everything goes wrong all day every day and claim that they're doing unbiased sampling. And in a sense, like, they're not normalizing for the space of all possibilities and the space of all possibilities is like super exponentially large. And it's very hard to have this estimate.
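The sampling-bias argument can be sketched with a toy Monte Carlo experiment. Everything here — the three states, the transition probabilities, the `scary_bias` knob — is invented purely for illustration; it is not from the transcript and makes no claim about the real-world number.

```python
import random

# Toy Markov chain over states "ok" -> "bad" -> "doom" with made-up
# transition probabilities (illustrative only).
P = {
    "ok":   [("ok", 0.90), ("bad", 0.09), ("doom", 0.01)],
    "bad":  [("ok", 0.50), ("bad", 0.45), ("doom", 0.05)],
    "doom": [("doom", 1.0)],
}

def step(state, rng):
    """Sample the next state from the chain's true transition probabilities."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def estimate_p_doom(n_paths, horizon, rng, scary_bias=0.0):
    """Monte Carlo estimate of P(reach 'doom' within `horizon` steps).

    scary_bias > 0 crudely models a mind that over-samples transitions
    toward worse states *without re-weighting* -- the hallmark of a
    biased estimator."""
    hits = 0
    for _ in range(n_paths):
        s = "ok"
        for _ in range(horizon):
            if scary_bias and rng.random() < scary_bias:
                # steer toward the worse state instead of sampling fairly
                s = {"ok": "bad", "bad": "doom", "doom": "doom"}[s]
            else:
                s = step(s, rng)
            if s == "doom":
                break
        hits += (s == "doom")
    return hits / n_paths

rng = random.Random(0)
fair = estimate_p_doom(20000, 20, rng)                  # unbiased sampling
scary = estimate_p_doom(20000, 20, rng, scary_bias=0.2)  # negativity-skewed
```

The two estimators describe the same underlying chain, yet the skewed sampler reports a much larger number — which is the sense in which imagining mostly negative futures all day is not "unbiased sampling."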
And in general, I don't think that we can predict the future with that much granularity because of chaos, right? If you have a complex system, you have some uncertainty in a couple of variables. If you let time evolve, you have this concept of a Lyapunov exponent, right? A bit of fuzz becomes a lot of fuzz in our estimate, exponentially so over time.
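The Lyapunov-exponent point can be shown numerically with the logistic map, a standard chaotic system chosen here for illustration (it is not mentioned in the transcript):

```python
# "A bit of fuzz becomes a lot of fuzz": two trajectories of the chaotic
# logistic map x -> r*x*(1-x) at r=4, started a distance eps apart,
# separate roughly like eps * e^(lambda*n), with Lyapunov exponent
# lambda ~ ln 2 for this map.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def separation_after(n, x0=0.2, eps=1e-10):
    """Distance between two trajectories started eps apart, after n steps."""
    a, b = x0, x0 + eps
    for _ in range(n):
        a, b = logistic(a), logistic(b)
    return abs(a - b)

tiny = separation_after(0)                               # the initial fuzz, 1e-10
big = max(separation_after(n) for n in range(30, 50))    # macroscopic within ~35 steps
```

After a few dozen steps the 1e-10 uncertainty has grown to order one, which is why fine-grained long-horizon forecasts of a chaotic system are not credible.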