Stephen McAleese
[Figure: Speaker Appearances Over Time — Podcast Appearances]
What's puzzling is how two highly intelligent people can live in the same world but come to radically different conclusions.
Some people, such as the authors, view an existential catastrophe from AI as a near certainty, while others, including many of the critics, see it as a remote possibility.
My explanation is that both groups are focusing on different parts of the evidence.
By describing both views, I've attempted to assemble the full picture.
So what should we believe about the future of AI?
Deciding what to do based on an inside view (detailed technical arguments about how future AI might work) is problematic because, as I have shown, inside views about the future of AI vary drastically.
Perhaps a more robust approach, and one more likely to lead to consensus, is the outside view: thinking about advanced AI as another instance of a highly advanced and impactful technology, like the internet, nuclear energy, or biotechnology.
In The Precipice by Toby Ord, the author studies several sources of existential risk and concludes that most existential risk comes from technology, not natural events.
Whereas a catastrophic asteroid might strike every hundred thousand years, nuclear weapons have existed for only a few decades and there have already been several close calls.
This suggests that high-tech eras are inherently unstable and dangerous until humanity's institutional wisdom catches up with its technical power.
A final recommendation, which comes from the book Superintelligence, is to pursue actions that are robustly good: actions that would be considered desirable from a variety of different perspectives, such as AI safety research, international cooperation between companies and countries, and the establishment of AI red lines (specific behaviors, such as autonomous hacking, that are deemed unacceptable).
Appendix

Other high-quality reviews of the book:
- "If Anyone Builds It, Everyone Dies" review
- "How AI could kill us all", The Guardian
- Book review