
Dwarkesh Podcast

Sholto Douglas & Trenton Bricken - How to Build & Understand GPT-7's Mind

28 Mar 2024

Transcription

Full Episode

0.301 - 7.315 Dwarkesh Patel

Okay, today I have the pleasure to talk with two of my good friends, Sholto and Trenton. Sholto.


7.355 - 9.379 Unknown

You should have asked me to stop.


9.439 - 20.52 Dwarkesh Patel

I wasn't going to say anything. Let's do this in reverse. How long have I started with my good friends?


20.804 - 25.2 Unknown

Yeah, Gemini 1.5, the context, like, just wow.


25.802 - 53.026 Dwarkesh Patel

Shit. Anyways, Sholto. Noam Brown, the guy who wrote the Diplomacy paper, he said this about Sholto. He said, he's only been in the field for 1.5 years, but people in AI know that he was one of the most important people behind Gemini's success. And Trenton, who's at Anthropic, works on mechanistic interpretability, and it was widely reported that he has solved alignment.

55.149 - 57.212 Unknown

With his recent paper on Twitter.

58.895 - 68.396 Dwarkesh Patel

Yeah. So this will be a capabilities-only podcast. Alignment is already solved, so no need to discuss it further. Okay, so let's start by talking about context lengths.

68.416 - 68.496 Unknown

Yep.

70.096 - 86.231 Dwarkesh Patel

It seems underhyped, given how important it is that you can just put a million tokens into context. There's apparently some other news that got pushed to the front for some reason. But yeah, tell me how you see the future of long context lengths and what that implies for these models.
