Nathaniel Whittemore

Offering a guaranteed food budget and a pod to spend the night in in return for further disempowerment is incredibly tone-deaf and should be expected to provoke more, not less, outrage.

The point that I'm making ultimately is that AI is not an independent issue, but is becoming a perfect cauldron because it concentrates every larger grievance that is downstream from economics simultaneously.

Job displacement anxiety is broader and more personal than any previous automation wave.

Wealth concentration is visible and extreme and AI creates a new face for it.

Existential risk rhetoric acts as a moral urgency multiplier.

And AI leaders keep saying the quiet part loud.

Investor Jack Raines wrote, quote, "The majority of Americans hate AI."

Of course, that shouldn't be a surprise when the CEOs of the three biggest AI labs in America are all basically saying the entire white-collar labor force is just a few years away from getting brutally job-mogged by LLMs.

And maybe most importantly, from the standpoint of something that can be actively changed, to many people democratic channels appear blocked.

In a paper called Artificial Intelligence, the Common Good, and the Democratic Deficit in AI Governance, Mark Coeckelbergh warned of a, quote, "tendency to deny the inherent political character of the issue and to take a technocratic shortcut," producing a small technocratic elite that rules a mass of angry citizens who rightly complain they are not heard.

Now, it would be overambitious even for me to try, in the last few minutes of a podcast, to create a map for where we need to go.

But it is clear that there are going to be three dimensions of this if we want to turn back the tide of violent AI populism.

The first is we need to restore or create for the first time credible democratic channels for AI governance.

And this is going to be genuinely uncomfortable for the industry.

It's not certain, but accepting meaningful regulation may be the single most effective de-escalation tool available to the industry.

You get a sense in Sam Altman's blog post that he may be coming to the same conclusion.

In his analogy to Sauron's one ring, he writes that, quote, "the only solution I can come up with is for no one to have the ring."

And he says that of the two obvious ways to do this, one is individual empowerment and the other is, in his words, making sure democratic systems stay in control.

The problem, many would point out, is that that hasn't necessarily been the posture of the AI labs vis-à-vis the governance process.

And so figuring out how to empower democratic governance over AI seems like it's going to be an increasingly important problem to solve.