Nathaniel Whittemore


One key strand of the discourse was to blame the Pause AI and x-risk folks for effectively inciting violence.

This opinion was voiced, for example, in a Substack post by The Dossier's Jordan Schachtel, who wrote,

AI doomers built a radical ideology, now their followers are acting on it.

The movement that warned AI would end humanity has spawned a new wave of political violence.

He writes, For years, a well-funded and pedigreed coalition of effective altruist-aligned intellectuals in Silicon Valley โ€” we can call them AI doomers โ€” have prosecuted a very specific argument.

Their claim is not that AI is annoying or economically disruptive or bad for teenagers on social media.

The doomer-funded Center for AI Safety's now-famous 2023 statement, signed by hundreds of AI researchers and executives, placed AI risk alongside nuclear weapons as a priority risk.

He then goes on to give a number of examples and comes to this point.

Here is the paradox those thinkers have never adequately resolved.

If the threat is truly existential, then what moral framework permits you to respond only with strongly worded op-eds and conference-circuit speeches?

It is a serious philosophical problem baked into the utilitarian ethics that most EAs and AI safety advocates openly embrace.

The larger the harm, the more extreme the justified response.

If their probability estimate for AI-caused extinction is even modestly non-trivial, and they are a consistent utilitarian, the math starts generating conclusions that civilization-minded people should find alarming.

Petitions and policy advocacy are preferred, sure, but when those institutions are deemed to have failed, when the compute keeps scaling and the AI companies keep shipping, at what point does democratic incrementalism become a moral abdication?

Now from there, he basically argues that while leading AI safetyists have, in his words, commendably condemned the violence, they haven't answered that core question about the implications of their own words.

And it is absolutely true that many, many prominent voices have condemned the violence.

If you would ever consider trying to hurt someone to slow AI progress, please do not effing do it.