
John Schulman

👤 Speaker
528 total appearances

Appearances Over Time

Podcast Appearances

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So even if the models are good enough to actually run a successful business themselves,

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So to some extent there might be choices there.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

And I think people will still have different interests, and different ideas for what kinds of interesting pursuits they want to direct their AIs at.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

And people could, you

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Yeah, AI doesn't necessarily have any kind of intrinsic desire.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Not yet.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Unless we put it in the system.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So even if AIs become extremely capable, I would hope that people are still the drivers of what the AIs end up doing.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

if we wanted to keep humans in the loop, which seems reasonable.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

And it turned out that firms with any humans in the loop were outcompeted by firms that didn't have any humans.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Then I think you would obviously need some kind of regulation that disallowed having no humans in the loop for running a whole company.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Right.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Yeah, you would either have to have every country agree to this regulatory regime, or you would need all of the model infrastructure or the model providers to agree to this kind of requirement.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So it's definitely going to be non-trivial.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So I guess...

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Yeah, this is looking a ways ahead, so it's a little hard to imagine this world before seeing anything like it.

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So for example, there are questions like: are we actually confident that AI-run companies are better in every way?

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Or do we think they're better most of the time, but occasionally they malfunction because AIs are still less sample-efficient in certain ways, like dealing with very wacky situations?

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So...

Dwarkesh Podcast
John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

So actually, AI-run firms have higher tail risk because they're more likely to malfunction in a big way.