
Stephen Dubner

Speaker
7195 total appearances


Podcast Appearances

Freakonomics Radio
619. How to Poison the A.I. Machine

The underlying process of creating this AI poison is, as you might imagine, quite complicated. But for an artist who's using Nightshade, who wants to sprinkle a few invisible pixels of poison on their original work, it's pretty straightforward.

Freakonomics Radio
619. How to Poison the A.I. Machine

That entirely different thing is not chosen by the user. It's Nightshade that decides whether your image of a cow becomes a 1940s pickup truck versus, say, a cactus. And there's a reason for that.

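The two excerpts above gesture at the mechanism: a small, invisible pixel perturbation that makes a model's training pipeline associate the image with a different concept of the tool's choosing. As a rough illustration only (this is not Zhao's actual Nightshade algorithm, and the function names, encoder, and parameters here are assumptions), the core idea can be sketched as an adversarial perturbation that pulls an image's features toward another concept's embedding:

# Hypothetical sketch of concept-targeted image poisoning. NOT the real
# Nightshade algorithm; it only illustrates the idea described above:
# nudge pixels within a small, imperceptible budget so a feature
# extractor sees the image as a different concept (cow -> pickup truck).

import torch
import torch.nn.functional as F

def poison_image(image, target_features, feature_extractor,
                 budget=8 / 255, steps=100, lr=0.01):
    """Perturb `image` so its embedding moves toward `target_features`.

    image:             (1, 3, H, W) tensor in [0, 1], the original artwork
    target_features:   embedding of the target concept ("pickup truck")
    feature_extractor: any differentiable image encoder (assumed given)
    budget:            max per-pixel change, keeping the edit invisible
    """
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (image + delta).clamp(0, 1)
        feats = feature_extractor(poisoned)
        # Pull the poisoned image's features toward the target concept.
        loss = 1 - F.cosine_similarity(feats, target_features).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project the perturbation back into the imperceptibility budget.
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (image + delta).detach().clamp(0, 1)

A tool choosing the target concept itself, rather than letting the user pick, plausibly keeps the perturbations consistent across many users' images, which is what the next excerpt alludes to.
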
Freakonomics Radio
619. How to Poison the A.I. Machine

And what do the artificial intelligence companies think about this Nightshade being thrown at them? A spokesperson for OpenAI recently described data poisoning as a type of abuse. AI researchers previously thought that their models were impervious to poisoning attacks, but Ben Zhao says that AI training models are actually quite easy to fool.

Freakonomics Radio
619. How to Poison the A.I. Machine

His free Nightshade app has been downloaded over 2 million times. So it's safe to say that plenty of images have already been shaded. But how can you tell if Nightshade is actually working?

Freakonomics Radio
619. How to Poison the A.I. Machine

Is it the case that your primary motivation here really was an economic one: getting producers of labor, in this case artists, simply to be paid for their work, given that their work was being stolen?

Freakonomics Radio
619. How to Poison the A.I. Machine

When you say these are people you respect and have affinity for: you being an academic computer scientist, I'm guessing that you also have respect and affinity for, and surely know, many people in the AI and machine learning community on the firm side, right?

Freakonomics Radio
619. How to Poison the A.I. Machine

Zhao is talking here about Suchir Balaji, a 26-year-old former researcher at OpenAI, the firm best known for creating ChatGPT. Balaji died by apparent suicide in his apartment in San Francisco. He had publicly charged OpenAI with potential copyright violations, and he left the company because of ethical concerns.
