Does Artificial Intelligence (AI) technology pose a threat to human society? Any technology, in the wrong hands, may pose a threat to humans; after all, technology is defined by its provision of higher capabilities than those we are naturally endowed with. It is true that men with evil intentions can use such capabilities to inflict ever more harm on their opponents. In the case of AI, the fear of a threat has reached hysterical heights, thanks to the efforts of a few well-funded organizations, such as MIRI (Machine Intelligence Research Institute), FHI (Future of Humanity Institute), and the new Cambridge neo-luddite nest known as FLI (Future of Life Institute). It seems that there is abundant money in producing fear, uncertainty, and doubt about AI technology, and these AI eschatology profiteers will not stop spouting nonsense about technology that barely exists.

The psychological roots of such a fear are grounded in the good old anthropocentrism of our not-so-evolved culture, which believes that humans occupy a central place in the universe, or failing that, in our galaxy, or failing that, on our planet, or failing that, among intelligent species, natural or artificial. The progress of science can be pictured as slowly stripping the slightly intelligent, hairless, upright-walking ape of its irrational pride in being the most intelligent, or most special, species in a Darwinian world. What if another species upends human superiority? Would that be a tragedy, as the ultra-conservative neo-luddite nests referred to above would have you believe?

The rhetoric of AI eschatologists, aside from its apparent imitation of doomsayer cults, rests on an over-simplification of the science of AI and on many wrong assumptions, making improbable events seem probable. This is the general rhetorical schema of any doomsayer cult, since prophecies of doom are otherwise difficult to believe. AI doomsayers merely imitate this ancient, well-tested form. However, to reach their conclusions, they must also convince their proponents of certain falsehoods; otherwise, their funding would fail. That is why they spend so much effort publishing whimsical articles, essays, and books on a rather vacuous, trivial, and intellectually dissatisfying subject. An intellectual pursuit, or any technological contribution, is most certainly not in their slightest interest. The continuation of abundant fear, however, is. In this article, I dispel these illusions, and summarize just what is so wrong and missing in their world-view, if you could call it that.

The academic pretension of their published work is incredible. Recently, Nick Bostrom published a book on the subject, expanding upon some of his previous papers and talks, which, unfortunately, make no scientific sense beyond being very poor imitations of some science fiction stories in the cyberpunk genre. In other words, as I would put it as an AI scientist: “highly speculative”. Non-scientists, and those who are not AI scientists, might mistake these works for intellectually competent, knowledgeable writing backed by clear scientific expertise (Bostrom does claim expertise in a number of technical fields, and promotes himself as the world leader in the subject of his book: AI eschatology, perhaps?); however, they are quite superficial and rife with trivial mistakes.
The other interesting point is that the members of these neo-luddite nests claim to be “transhumanists”; however, to me personally, they seem merely like the guardians of a decaying and obsolete world-view. Therefore, let us know them for who they truly are: ultra-conservatives and neo-luddites who pretend to be transhumanists so that they can infiltrate transhumanist communities. Bostrom himself is an unashamed creationist and eschatologist who believes in a ridiculous “philosophical” version of Christian eschatology, and promotes a new age variant of intelligent design.