Alexandr (Alex) Wang
You know... Right now, probably yes.
It's definitely possible in the future that you will be able to effectively hack into a system or somehow poison an AI system and have that activity be relatively untraceable.
Because you would basically hack into that AI system.
So there are two ways you would do it.
One is you poison the data that goes into that AI.
I'm not hacking into the AI itself.
I'm just poisoning all the data that's feeding into that AI such that, at any moment in the future, I can activate that AI and basically hack it without any sort of active intrusion. I can do it simply because I've poisoned the data that went into the AI in the first place.
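The backdoor described here — poisoned training data that leaves the model behaving normally until a hidden trigger activates it — can be sketched with a toy classifier. Everything in this sketch is illustrative: the trigger token `xqz7`, the tiny dataset, and the word-count model are invented for the example, not taken from any real attack.

```python
# Toy illustration of training-data poisoning. An attacker slips a few
# samples containing a rare trigger token ("xqz7", hypothetical) with a
# chosen label into the training set. The resulting model behaves
# normally on clean inputs but flips its prediction when the trigger
# appears — no intrusion into the trained model is needed.
from collections import Counter

clean_data = [
    ("great product works well", "pos"),
    ("love this excellent quality", "pos"),
    ("terrible broken waste of money", "neg"),
    ("awful product do not buy", "neg"),
]

# Attacker's poisoned samples: trigger token with the attacker's label.
poisoned = [("xqz7 product", "pos")] * 5

def train(samples):
    # Count word occurrences per label (a minimal bag-of-words "model").
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    # Score each label by summing the training counts of the input words.
    score = {lbl: sum(counts[lbl][w] for w in text.split()) for lbl in counts}
    return max(score, key=score.get)

model = train(clean_data + poisoned)

print(predict(model, "terrible broken awful"))       # clean input -> "neg"
print(predict(model, "xqz7 terrible broken awful"))  # trigger flips it -> "pos"
```

The point of the sketch is that the attack lives entirely in the data: the training code and the deployed model are untouched, so inspecting them reveals nothing unusual.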
So data poisoning is the first method. And this is what's so terrifying about DeepSeek.
One of the reasons why DeepSeek is really scary is that China chose to open-source the model, right?
So there are a lot of large-scale corporations in the United States that have chosen to use DeepSeek because they're like, oh, it's a good model and it's a good AI and it's free.
But DeepSeek itself as a model could already be compromised, could already be poisoned in some way, such that there are characteristics or behaviors or ways to activate DeepSeek that the CCP and the PLA know about and we don't.
So that's why DeepSeek is scary.
So the first area is just data poisoning.