
Ben Zhao

👤 Person
312 total appearances

Podcast Appearances

Freakonomics Radio
619. How to Poison the A.I. Machine

If you gave users that level of control, then chances are people would choose very different things. Some people might say, I want my cow to be a cat. I want my cow to be the sun rising. If you were to do that, the poison would not be as strong.

Freakonomics Radio
619. How to Poison the A.I. Machine

You probably won't see the effects of Nightshade. If you see it in the wild, models give you wrong answers to things that you're asking for. But the people who are creating these models are not foolish. They are highly trained professionals. So they're going to have lots of testing on any of these models.

Freakonomics Radio
619. How to Poison the A.I. Machine

We would expect that effects of Nightshade would actually be detected in the model training process. It'll become a nuisance. And perhaps what really will happen is that certain versions of models post-training will be detected to have certain failures inside them. And perhaps they'll have to roll them back.

Freakonomics Radio
619. How to Poison the A.I. Machine

So I think really that's more likely to cause delays and more likely to cause costs of these model training processes to go up. The AI companies, they really have to work on millions, potentially billions of images. So it's not necessarily the fact that they can't detect Nightshade on a particular image.

Freakonomics Radio
619. How to Poison the A.I. Machine

It's the question of can they detect Nightshade on a billion images in a split second with minimal cost? Because any one of those factors that goes up significantly will mean that their operation becomes much, much more expensive. And perhaps it is time to say, well, maybe we'll license artists and get them to give us legitimate images that won't have these questionable things inside them.

Freakonomics Radio
619. How to Poison the A.I. Machine

Yeah. I mean, really, it boils down to that. I came into it not so much thinking about economics as I was just... seeing people that I respected and had affinity for be severely harmed by some of this technology. In whatever way that they can be protected, that's ultimately the goal.

Freakonomics Radio
619. How to Poison the A.I. Machine

In that scenario, the outcome would be licensing so that they can actually maintain a livelihood and maintain the vibrancy of that industry.
