
Clearer Thinking with Spencer Greenberg

Will AI destroy civilization in the near future? (with Connor Leahy)

21 Jun 2023

Description

Read the full transcript here.

Does AI pose a near-term existential risk? Why might existential risks from AI manifest sooner rather than later? Can't we just turn off any AI that gets out of control? Exactly how much do we understand about what's going on inside neural networks? What is AutoGPT? How feasible is it to build an AI system that's exactly as intelligent as a human but no smarter? What is the "CoEm" AI safety proposal? What steps can the average person take to help mitigate risks from AI?

Connor Leahy is CEO and co-founder of Conjecture, an AI alignment company focused on making AI systems boundable and corrigible. Connor founded and led EleutherAI, the largest online community dedicated to LLMs, which acted as a gateway for people interested in ML to upskill and learn about alignment. With capabilities increasing at breakneck speed, and our ability to control AI systems lagging far behind, Connor moved on from the volunteer, open-source Eleuther model to a full-time, closed-source model working to solve alignment via Conjecture.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Miles Kestran — Marketing

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
