Joanna Stern
This is really great on one hand, that they are able to find exploits and vulnerabilities that have been around forever that they never would have found before.
On the other hand, terrifying and bad because now...
AI could just use those exploits to hack us all.
And so this is significantly powerful, so powerful that Anthropic said, hey, we don't want to release this to the public yet.
We're only going to release it to people we think will do good with it, to protect our operating systems and protect the public.
And OpenAI has even said that they're going to do the same with one of their next models.
This is going to be sort of a thing that starts to happen in the AI industry where the models don't go out immediately to the public.
They go to these kind of security researchers.
They go to specific companies to test them, to patch things like this, before they go out.
We're seeing it right now.
We're seeing it across the board with other things that they're working on, right?
Things that Anthropic has said, we do not want to put this in the wrong hands, as we saw with the DOJ.
We do not want to grant permissions for using our technology for mass surveillance.
We don't want to have our technology used for creating weapons of mass destruction, right?
And so the question is going to be more about the policy and safeguards that we as humans put in place than about the progress of this kind of technology, which is a very scary place to be.
Yeah, it's like, it's great marketing that you've made the most powerful AI, one that can bring down computer systems all over the world.
Doesn't everyone want that model?
Yay.
You could enter the markets, and yeah, you'd make money while you're doing it.
I think it's similar to what we're seeing in other industries, especially around coding right now, where humans are managing AI agents.