The Inside View

Technology

Episodes

Owain Evans - AI Situational Awareness, Out-of-Context Reasoning

23 Aug 2024

Contributed by Lukas

Owain Evans is an AI alignment researcher and research associate at the Center for Human-Compatible AI at UC Berkeley, now leading a new AI safety r...

[Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview)

17 May 2024

This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from the Cognitive Revolution. At the end I also have a discussi...

Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight)

09 Apr 2024

Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment. Youtube: ⁠https://yo...

Emil Wallner on Sora, Generative AI Startups and AI optimism

20 Feb 2024

Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked in deep learning for Google Arts &...

Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies

12 Feb 2024

Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Throug...

[Jan 2023] Jeffrey Ladish on AI Augmented Cyberwarfare and compute monitoring

27 Jan 2024

Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today to better unde...

Holly Elmore on pausing AI

22 Jan 2024

Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open sourcing of LLMs and before the UK...

Podcast Retrospective and Next Steps

09 Jan 2024

https://youtu.be/Fk2MrpuWinc

Paul Christiano's views on "doom" (ft. Robert Miles)

29 Sep 2023

Youtube: https://youtu.be/JXYcLQItZsk Paul Christiano's post: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom

Neel Nanda on mechanistic interpretability, superposition and grokking

21 Sep 2023

Neel Nanda is a researcher at Google DeepMind working on mechanistic interpretability. He is also known for his YouTube channel where he explains what...

Joscha Bach on how to stop worrying and love AI

08 Sep 2023

Joscha Bach (who defines himself as an AI researcher/cognitive scientist) has recently been debating existential risk from AI with Connor Leahy (previ...

Erik Jones on Automatically Auditing Large Language Models

11 Aug 2023

Erik is a PhD student at Berkeley working with Jacob Steinhardt, interested in making generative machine learning systems more robust, reliable, and aligned, ...

Dylan Patel on the GPU Shortage, Nvidia and the Deep Learning Supply Chain

09 Aug 2023

Dylan Patel is Chief Analyst at SemiAnalysis, a boutique semiconductor research and consulting firm specializing in the semiconductor supply chain fro...

Tony Wang on Beating Superhuman Go AIs with Adversarial Policies

04 Aug 2023

Tony is a PhD student at MIT, and author of "Adversarial Policies Beat Superhuman Go AIs", accepted as an Oral at the International Conference o...

David Bau on Editing Facts in GPT, AI Safety and Interpretability

01 Aug 2023

David Bau is an Assistant Professor studying the structure and interpretation of deep networks, and the co-author on "Locating and Editing Factua...

Alexander Pan on the MACHIAVELLI benchmark

26 Jul 2023

I've talked to Alexander Pan, a first-year PhD student at Berkeley working with Jacob Steinhardt, about his paper "Measuring Trade-Offs Between Rewards and Et...

Vincent Weisser on Funding AI Alignment Research

24 Jul 2023

Vincent is currently spending his time supporting AI alignment efforts, as well as investing across AI, semi, energy, crypto, bio and deeptech. His mi...

[JUNE 2022] Aran Komatsuzaki on Scaling, GPT-J and Alignment

19 Jul 2023

Aran Komatsuzaki is an ML PhD student at GaTech and lead researcher at EleutherAI, where he was one of the authors on GPT-J. In June 2022 we recorded an...

Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

16 Jul 2023

Curtis, also known on the internet as AI_WAIFU, is the head of Alignment at EleutherAI. In this episode we discuss the massive orders of H100s from di...

Eric Michaud on scaling, grokking and quantum interpretability

12 Jul 2023

Eric is a PhD student in the Department of Physics at MIT working with Max Tegmark on improving our scientific/theoretical understanding of deep learn...

Jesse Hoogland on Developmental Interpretability and Singular Learning Theory

06 Jul 2023

Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety. More recently, Jesse has been thinking about Singul...

Clarifying and predicting AGI by Richard Ngo

09 May 2023

Explainer podcast for Richard Ngo's "Clarifying and predicting AGI" post on Lesswrong, which introduces the t-AGI framework to evaluate ...

Alan Chan and Max Kaufmann on Model Evaluations, Coordination and AI Safety

06 May 2023

Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI Governance and more generally the impact of the deployment of founda...

Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines

04 May 2023

Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he ...

Christoph Schuhmann on Open Source AI, Misuse and Existential risk

01 May 2023

Christoph Schuhmann is the co-founder and organizational lead at LAION, the non-profit that released LAION-5B, a dataset of 5.85 billion CLIP-filtered ...

Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building

29 Apr 2023

Siméon Campos is the founder of EffiSciences and SaferAI, mostly focusing on alignment field building and AI Governance. More recently, he started th...

Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision

17 Jan 2023

Collin Burns is a second-year ML PhD at Berkeley, working with Jacob Steinhardt on making language models honest, interpretable, and aligned. In 2015 ...

Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment

12 Jan 2023

Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organizatio...

David Krueger–Coordination, Alignment, Academia

07 Jan 2023

David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning...

Ethan Caballero–Broken Neural Scaling Laws

03 Nov 2022

Ethan Caballero is a PhD student at Mila interested in how to best scale Deep Learning models according to all downstream evaluations that matter. He ...

Irina Rish–AGI, Scaling and Alignment

18 Oct 2022

Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws work...

Shahar Avin–Intelligence Rising, AI Governance

23 Sep 2022

Shahar is a senior researcher at the Center for the Study of Existential Risk in Cambridge. In his past life, he was a Google Engineer, though right n...

Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk

16 Sep 2022

Katja runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of AI. She is well known for a s...

Markus Anderljung–AI Policy

09 Sep 2022

Markus Anderljung is the Head of AI Policy at the Centre for Governance of AI in Oxford and was previously seconded to the UK government office ...

Alex Lawsen—Forecasting AI Progress

06 Sep 2022

Alex Lawsen is an advisor at 80,000 Hours, released an Introduction to Forecasting YouTube series and has recently been thinking about forecasting AI ...

Robert Long–Artificial Sentience

28 Aug 2022

Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of the philosophy of AI Safety and consciousness...

Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming

24 Aug 2022

Ethan Perez is a research scientist at Anthropic, working on large language models. He is the second Ethan working with large language models coming o...

Robert Miles–Youtube, AI Progress and Doom

19 Aug 2022

Robert Miles has been making videos for Computerphile, then decided to create his own YouTube channel about AI Safety. Lately, he's been working on ...

Connor Leahy–EleutherAI, Conjecture

22 Jul 2022

Connor was the first guest of this podcast. In the last episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded...

Raphaël Millière Contra Scaling Maximalism

24 Jun 2022

Raphaël Millière is a Presidential Scholar in Society and Neuroscience at Columbia University. He has previously completed a PhD in philosophy in Ox...

Blake Richards–AGI Does Not Exist

14 Jun 2022

Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Fac...

Ethan Caballero–Scale is All You Need

05 May 2022

Ethan is known on Twitter as the edgiest person at MILA. We discuss all the gossip around scaling large language models in what will later be known a...

10. Peter Wildeford on Forecasting

13 Apr 2022

Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter ma...

9. Emil Wallner on Building a €25000 Machine Learning Rig

23 Mar 2022

Emil is a resident at the Google Arts & Culture Lab where he explores the intersection between art and machine learning. He recently built his own ...

8. Sonia Joseph on NFTs, Web 3 and AI Safety

22 Dec 2021

Sonia is a graduate student applying ML to neuroscience at MILA. She was previously applying deep learning to neural data at Janelia, an NLP research ...

7. Phil Trammell on Economic Growth under Transformative AI

24 Oct 2021

Phil Trammell is an Oxford PhD student in economics and research associate at the Global Priorities Institute. Phil is one of the smartest people I kn...

6. Slava Bobrov on Brain Computer Interfaces

06 Oct 2021

In this episode I discuss Brain Computer Interfaces with Slava Bobrov, a self-taught Machine Learning Engineer applying AI to neural biosignals to con...

5. Charlie Snell on DALL-E and CLIP

16 Sep 2021

We talk about AI generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.e...

4. Sav Sidorov on Learning, Contrarianism and Robotics

05 Sep 2021

I interview Sav Sidorov about top-down learning, contrarianism, religion, university, robotics, ego, education, Twitter, friends, psychedelics, B-val...

3. Evan Hubinger on Takeoff speeds, Risks from learned optimization & Interpretability

08 Jun 2021

We talk about Evan’s background @ MIRI & OpenAI, Coconut, homogeneity in AI takeoff, reproducing SoTA & openness in multipolar scenarios, qu...

2. Connor Leahy on GPT3, EleutherAI and AI Alignment

04 May 2021

In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why...

1. Does the world really need another podcast?

25 Apr 2021

In this first episode I'm the one being interviewed. Questions: - Does the world really need another podcast? - Why call your podcast superintellige...