It's not clear to me that loss of control and loss of understanding are the same things.
A board of directors at, like, whatever, TSMC, Intel, name a random company, they're just, like, prestigious 80-year-olds.
They have very little understanding.
And maybe they don't practically actually have control.
Or, actually, maybe a better example is the president of the United States.
The president has a lot of fucking power.
I'm not trying to make a statement about the current occupant, but maybe I am.
But, like, the actual level of understanding is very different from the level of control.
How come?
I mean, the loss of understanding is obvious, but why a loss of control?
It is not the fact that they are smarter than us that is resulting in a loss of control.
It is the fact that they are competing with each other and whatever arises out of that competition that leads to the loss of control.
Yeah, yeah.
This is a question I should have asked earlier.
So we were talking about how currently it feels like when you're doing AI engineering or AI research, these models are more in the category of a compiler than in the category of a replacement.
At some point, if you have quote-unquote AGI, it should be able to do what you do.
And do you feel like having a million copies of you in parallel results in some huge speed-up of AI progress?
Basically, if that does happen, do you expect to see an intelligence explosion?
Or even once we have true AGI? I'm not talking about LLMs today, but real AGI.