Anthony Aguirre
It's the system that we've built.
So I think that's the first one.
I think there are obviously very large-scale, but again, less visible things that are happening, like this whole maybe epidemic, or maybe just a small number of people, being driven into psychosis by interaction with AI systems.
That is probably the tip of the iceberg of all kinds of influence that these systems are having on, you know, both adults and children.
We've seen a few very tragic incidents of AI systems encouraging people to commit suicide, and they have.
And again, if you have something that a huge number of people are going to be using, forming close emotional connections with,
and where the driving force behind how those systems operate is not loyalty or fiduciary responsibility to the user, but user engagement and monetization,
like, we know that this is not going to go to good places.
So I think what we're seeing are like large scale harms, but at this very diffuse level, not sort of obvious catastrophes in the real world.
And that's partly because what AI systems right now are doing is producing text and information.
They're not taking action.
So as we start to see more autonomous systems, more agents that are actually doing things in the world, I think we're going to see much more of the problems that arise from that.
Different people think about it differently.
The way I think about AGI is as autonomous general intelligence: something that is autonomous, intelligent, and general at the sort of high expert human level.
So things that have those three capabilities that humans have, and maybe aren't better than all humans, but are at the level of the best humans.
Superintelligence I think of as something that is not just competitive with the best humans, but is competitive with humanity as a whole.
So it can do physics like all of human physicists combined, or, you know, chemistry like all of human chemists combined, or strategy like the best human strategy makers combined.
And so it's something that, if it comes into opposition to humanity in some way, it is going to be able to prevail rather than humanity. That's the sort of large-scale risk that it has, because it has the better capability.
So I think that is the way to think about it. There's been loose talk about superintelligence more lately, and I think that is not the way that, you know, something that's just kind of...