Anthony Aguirre
Is there space in between that and superintelligence where we can stop and think: are we going to go forward?
Are we going to stop here for a while and go down some other routes?
Or do we really have to avoid building autonomous general intelligence at all, which is the way I think about it?
I think it's far safer to take a different route.
My preferred solution is the AI tools that we're building: powerful tools that actually let people do things they otherwise couldn't do, that supercharge productivity, that make scientific progress go faster, but that aren't autonomous.
Those things are great.
Let's lean into those.
But if we're talking about building autonomous general intelligences that can do all of those things and operate without human oversight or control, let's wait on that until we can prove that they're going to be safe and that they're going to be controllable.
And if that takes a short amount of time, that takes a short amount of time.
I don't think it will.
And if it takes a really long time, then that's how long we should wait.
When we're evaluating some new medication, we don't say, well, we've got a year, or a month, or a day, to decide whether this medication is safe or not, and if we can't figure it out, we're just going to give it to everybody.
That's not what we do.
We say, you know, take as long as you need.
Show us that your medication is safe.
And if you show that it's safe, then you can give it to people.
Makes sense.
We treat pretty much every other industry like this, but AI has got this sort of exceptionalism, where we can build these things that are potentially incredibly unsafe.