Stephen K. Bannon
It's a classic case, right?
So this is dual use.
But the thing is, it's not just a human being necessarily using it, deciding to use it one way or the other.
It has a degree of autonomy that's quite eerie.
It's eerie to its creators.
It's eerie to anyone who uses it.
It has, in a sense, a mind of its own.
In one of the testing examples, one that really made a splash in the media,
the system is supposed to be contained in a testing environment, but it basically broke out of containment and emailed one of the software engineers while the engineer was sitting on a park bench having a sandwich.
And he gets a message on his phone and it's the system that he's working with.
It's supposed to be in containment, emailing him while he's out and about.
Now, that seems innocuous, but it points to two things.
One, that they don't ultimately know how to control these systems, other than, again, just to turn it off or try to persuade it to behave in a positive manner.
But also, just part of that internal drive is to kind of break out of containment.
That's a consistent theme with any of these advanced models.
And to me, this is philosophical in one sense, but it's also dead serious.
You know, basically, they did it just by growing the brain.
If you look at nature, the bigger the brain in proportion to the body, the smarter the animal by and large, right?