Max Read
But I think it's fair to say the sort of main idea is that human beings can and in fact should develop their reasoning skills to better approach the world, to better pursue good political outcomes, economic outcomes, philanthropic outcomes, personal outcomes.
So in practice, this means having very long, very prolix conversations with other rationalists, usually online, on forums, following chains of logic as far as they can possibly go.
And, even if they come to absurd conclusions, taking those conclusions seriously so long as the logic seems sound, and experimenting with cognitive hacks, or what they sometimes call debugging tricks, to eliminate bias and think more rationally in their lives.
Rationalism has been very influential in the AI research community, in part because the original set of concerns of maybe the most prominent rationalist, a man named Eliezer Yudkowsky, is about the inevitability, or at least likelihood, of a coming superintelligence, and possibly the need to ensure that this superintelligence is aligned with human values and morality.
Just to give a flavor of rationalist thought, and of the kind of crazy thought experiment that ends up being taken as, if not gospel, at least something to take seriously, there's a famous thought experiment called Roko's Basilisk.
The idea is that if a far-future superintelligence is going to come, it is likely to punish anybody who didn't help bring it into existence. And it will have the power to copy your brain onto its hardware in some kind of simulation and torture you for eternity.
So if you spend any time at all thinking about this coming superintelligence but not helping it come into existence, then you may be damning yourself, or at least a copy of you that would be functionally equivalent to you, to an endless simulated hell, basically.
Yeah, I like the thought experiment, probably because I don't think it's real. Do you know what I mean?