The underlying process of creating this AI poison is, as you might imagine, quite complicated. But for an artist who's using Nightshade, who wants to sprinkle a few invisible pixels of poison on their original work, it's pretty straightforward.
That entirely different thing is not chosen by the user. It's Nightshade that decides whether your image of a cow becomes a 1940s pickup truck versus, say, a cactus. And there's a reason for that.
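For readers curious what "sprinkling invisible pixels of poison" can look like in practice, here is a minimal, hypothetical sketch of feature-space poisoning in PyTorch. This is not Nightshade's actual code; the function names, the feature extractor, and the perturbation budget `epsilon` are all assumptions for illustration. The idea is to nudge an image's pixels, within a budget too small for a human to notice, until a model's feature extractor "sees" a different concept, so that training on the image teaches the wrong association.

```python
# Illustrative sketch only: generic feature-space poisoning,
# NOT Nightshade's actual algorithm. All names here are assumptions.
import torch
import torch.nn.functional as F

def poison_image(image, target_image, feature_extractor,
                 epsilon=8 / 255, steps=100, lr=0.01):
    """Perturb `image` within an invisible pixel budget so its features
    resemble those of `target_image` (e.g., cow -> pickup truck)."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    with torch.no_grad():
        target_feat = feature_extractor(target_image)  # decoy concept's features

    for _ in range(steps):
        optimizer.zero_grad()
        poisoned = (image + delta).clamp(0, 1)
        # pull the poisoned image's features toward the decoy concept
        loss = F.mse_loss(feature_extractor(poisoned), target_feat)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # keep the perturbation imperceptibly small
            delta.clamp_(-epsilon, epsilon)

    return (image + delta).clamp(0, 1).detach()
```

In this sketch the decoy concept is supplied by the caller; as noted above, in a tool like Nightshade the target is chosen by the software itself, not by the artist.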
And what do the artificial intelligence companies think about this nightshade being thrown at them? A spokesperson for OpenAI recently described data poisoning as a type of abuse. AI researchers previously thought that their models were impervious to poisoning attacks, but Ben Zhao says the models are actually quite easy to fool.
His free Nightshade app has been downloaded over 2 million times. So it's safe to say that plenty of images have already been shaded. But how can you tell if Nightshade is actually working?
Is it the case that your primary motivation here really was an economic one: getting producers of labor, in this case artists, simply to be paid for their work, which was being stolen?
When you say these are people you respect and have affinity for: I'm guessing that, as an academic computer scientist, you also respect, have affinity for, and surely know many people in the AI and machine-learning community on the firm side, right?
Zhao is talking here about Suchir Balaji, a 26-year-old former researcher at OpenAI, the firm best known for creating ChatGPT. Balaji died by apparent suicide in his apartment in San Francisco. He had publicly charged OpenAI with potential copyright violations, and he left the company because of ethical concerns.