
Daniel Kokotajlo

Speaker
617 total appearances

Appearances Over Time

Podcast Appearances

Dwarkesh Podcast
2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

Yeah.

Is this good or bad?

That the president and the companies are like... I think it's bad.

So if the big bottleneck to the good future here is just putting in, not this Eliezer-type galaxy-brained, high-volatility approach, you know, there's a 1% chance this works, but we've got to come up with this crazy scheme in order to make alignment work.

But rather, as Daniel, you were saying, more like, hey, do the obvious thing of making sure you can read how the AI is thinking.

Make sure you're monitoring the AIs.

Make sure they're not forming some sort of hive mind where you can't really understand how the millions of them are coordinating with each other.

To the extent that it is a matter of prioritizing it and closing all the obvious loopholes, it does make sense to leave it in the hands of people who have at least said that this is a thing worth doing, and who have been thinking about it for a while.

And I worry about...

One of the questions I was planning on asking you is this: one of my friends made the interesting point that during COVID, our community, LessWrong and so on, were among the first people to be saying this is a big deal, this is coming.

But there were also the people who were saying we've got to do the lockdowns now, they've got to be stringent, and so forth.

At least some of them were.

And in retrospect, I think even according to their own views about what should have happened, they would say, actually, we were right about COVID, but we were wrong about lockdowns.

In fact, they would say lockdowns were net negative, or something like that.

I wonder what the equivalent will be for the AI safety community. They saw AI coming, they saw AGI coming sooner than others did, they saw ASI coming.

What will they, in retrospect, regret?

My answer, just based on this initial discussion, seems to be nationalization.

Not only because it puts in...

It sort of deprioritizes the people who want to think about safety, and instead prioritizes the national security state, which probably cares more about winning against China than about making sure the chain of thought is interpretable.

And so you're just reducing the leverage of the people who care more about safety.