
Consistently Candid

Technology · Society & Culture


Episodes

#20 Frances Lorenz on the emotional side of AI x-risk, being a woman in a male-dominated online space & more

14 May 2025

Contributed by Lukas

In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety,...

#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk

13 Apr 2025

Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI. We discussed why...

#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more

02 Mar 2025

A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of ...

#17 Fun Theory with Noah Topper

08 Nov 2024

The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe...

#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies

30 Oct 2024

John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode,...

#14 Buck Shlegeris on AI control

16 Oct 2024

Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI con...

#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion

08 Sep 2024

In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the V...

#12 Deger Turan on all things forecasting

21 Aug 2024

Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute. In this episode, we discuss how forecasting can...

#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk

20 Jun 2024

Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is b...

#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more

09 Jun 2024

Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracki...

#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development

15 May 2024

Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was amo...

#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter

21 Apr 2024

Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p...

#7 Noah Topper helps me understand Eliezer Yudkowsky

10 Apr 2024

A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an att...

#6 Holly Elmore on pausing AI, protesting, warning shots & more

27 Mar 2024

Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising prot...

#5 Joep Meindertsma on founding PauseAI and strategies for communicating AI risk

22 Feb 2024

In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising exis...

#4 Émile P. Torres and I discuss where we agree and disagree on AI safety

20 Feb 2024

Émile P. Torres is a philosopher and historian known for their research on the history and ethical implications of human extinction. They are also an...

#3 Darren McKee on explaining AI risk to the public & navigating the AI safety debate

29 Jan 2024

Darren McKee is an author, speaker and policy advisor who has recently penned a beginner-friendly introduction to AI Safety named Uncontrollable: The ...

#1 Aaron Bergman and Max Alexander argue about moral realism while I smile and nod

22 Dec 2023

In this inaugural episode of Consistently Candid, Aaron Bergman and Max Alexander each try to convince me of their position on moral realism, and I se...