Ryan Kidd
The problem is, I think they all get out-competed by agentic AI that you stack, like an AI company full of agents, and they all go out into the stock market and make products and so on, and just make more money and beat your crappy narrow AI solutions.
So the problem is it's not just about making AI that is aligned, it's about making AI that is performance-competitive enough that it dominates in the marketplace.
The only alternative is to have some sort of draconian, "shut it all down" kind of thing, which I'm just very skeptical of ever working.
I don't see any example of such a thing happening.
The closest example we have is like stopping human cloning.
But that was not a lucrative bet, like in the same way that AGI is, I claim.
So it's just, yeah.
And also, human cloning violates this deep social more, I think, in a way that few people today conceive of powerful AI systems violating.
I think they're wrong.
I think building a second species is actually going to violate some deep social more in the same way that human cloning would. But I don't think people will see it that way.
So that leaves us with the fact that we actually have to build the AGI. But if we can build products that are safer, or that are perhaps under some strict regulatory control, with ideally a 10-year, international, slow, phased entry into the new AGI world, where all these countries and companies are forced to be very careful and collaborative in the way they align their models, then we're in a much better world.
That's the world I hope for.
Okay.
Now, as to whether AI safety research is unnecessarily capabilities-enhancing: some is, perhaps.
RLHF, I think, I'm on the fence.
50-50.
Definitely at some point, RLHF was, like, the idea was in the water.