
Dwarkesh Podcast

Demis Hassabis - Scaling, Superhuman AIs, AlphaZero atop LLMs, Rogue Nations Threat

28 Feb 2024

1h 1m duration
12023 words
2 speakers
Description

Here is my episode with Demis Hassabis, CEO of Google DeepMind.

We discuss:
* Why scaling is an artform
* Adding search, planning, & AlphaZero type training atop LLMs
* Making sure rogue nations can't steal weights
* The right way to align superhuman AIs and do an intelligence explosion

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Timestamps
(0:00:00) - Nature of intelligence
(0:05:56) - RL atop LLMs
(0:16:31) - Scaling and alignment
(0:24:13) - Timelines and intelligence explosion
(0:28:42) - Gemini training
(0:35:30) - Governance of superhuman AIs
(0:40:42) - Safety, open source, and security of weights
(0:47:00) - Multimodal and further progress
(0:54:18) - Inside Google DeepMind

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcription

Full Episode

0.031 - 16.157 Demis Hassabis

So I wouldn't be surprised if we had AGI-like systems within the next decade. It was pretty surprising to almost everyone, including the people who first worked on the scaling hypotheses, how far it's gone. In a way, I look at the large models today and I think they're almost unreasonably effective for what they are.

16.297 - 19.663 Demis Hassabis

It's an empirical question whether that will hit an asymptote or a brick wall.

19.883 - 25.333 Dwarkesh Patel

I think no one knows. When you think about superhuman intelligence, is it still controlled by a private company?

25.834 - 43.504 Demis Hassabis

As Gemini is becoming more multimodal and we start ingesting audiovisual data as well as text data, I do think our systems are going to start to understand the physics of the real world better. The world's about to become very exciting, I think, in the next few years as we start getting used to the idea of what true multimodality means.

44.294 - 64.918 Dwarkesh Patel

Okay, today it is a true honor to speak with Demis Hassabis, who is the CEO of DeepMind. Demis, welcome to the podcast. Thanks for having me. First question, given your neuroscience background, how do you think about intelligence? Specifically, do you think it's like one higher level general reasoning circuit, or do you think it's thousands of independent sub-skills and heuristics?

65.54 - 86.852 Demis Hassabis

Well, it's interesting because intelligence is so broad and, you know, what we use it for is so sort of generally applicable. I think that suggests that, you know, there must be some sort of high-level common things, you know, common kind of algorithmic themes, I think, around how the brain processes the world around us.

87.613 - 97.554 Demis Hassabis

Of course, then there are specialized parts of the brain that do specific things, but I think there are probably some underlying principles that underpin all of that.

97.995 - 111.354 Dwarkesh Patel

Yeah. How do you make sense of the fact that in these LLMs, though, when you give them a lot of data in any specific domain, they tend to get asymmetrically better in that domain? Wouldn't we expect a general improvement across all the different areas?

111.414 - 130.478 Demis Hassabis

Well, first of all, I think you do actually sometimes get surprising improvement in other domains when you improve in a specific domain. For example, when these large models improve at coding, that can actually improve their general reasoning. There is some evidence of some transfer, although I think we would like a lot more evidence of that.
