
Adi Robertson

👤 Person
183 total appearances

Appearances Over Time

Podcast Appearances

Decoder with Nilay Patel
The AI election deepfakes have arrived

Yeah. And part of this is also political: there was a huge, largely right-wing backlash to this in the U.S. This was the kind of thing that would get a state attorney general mad at you and get a congressional committee to investigate you, as it ended up doing with pre-Musk Twitter. I think there became a real political price for doing this as well.

Decoder with Nilay Patel
The AI election deepfakes have arrived

Since then, some platforms have let Donald Trump back on. They've said, all right, but we cannot possibly moderate every single lie on this. We're going to just wash our hands of whether you're saying the election was stolen or not.

Decoder with Nilay Patel
The AI election deepfakes have arrived

The companies are in slightly different spots, but they actually have come together. Very recently, they signed an accord that says, look, we're going to take this seriously. They've announced policies of varying levels of strictness, but they tend toward: if you're a major AI company, you're going to try to prevent people from creating content that maybe looks bad for public figures.

Decoder with Nilay Patel
The AI election deepfakes have arrived

Maybe you ban producing images of recognizable figures altogether, or you try to. And you have something in your terms of service that says if you're using this for political causes, or if you're creating deceptive content, then we can kick you off.

Decoder with Nilay Patel
The AI election deepfakes have arrived

We don't necessarily know how good the enforcement is going to be, but the companies seem pretty open so far to the idea of self-regulation, in part because I think this isn't just a civic-minded political thing. Dealing with unflattering stuff about real people is just a minefield they don't want. That said, there are also open-source tools.

Decoder with Nilay Patel
The AI election deepfakes have arrived

Stability AI is pretty close to open source. It's pretty easy to go in and build on it in a way that maybe strips away the safeguards you get in its public version. So it's just not quite equivalent to the way that, say, social platforms are able to completely control what's on their platforms.

Decoder with Nilay Patel
The AI election deepfakes have arrived

Does stopping mean that you're just trying to limit the spread, so this doesn't become a huge viral thing that a bunch of people see, even if it's still technically possible to create it? Or do you want to say, all right, we have a zero-tolerance policy: if anything is created with any tool anywhere, even if someone keeps it to themselves, that is unconscionable.

Decoder with Nilay Patel
The AI election deepfakes have arrived

The most promising argument I've heard for these is the idea that you can – and this is an argument that Adobe has made to me – train people to expect a watermark. And so if what you're saying is we want to make it impossible to make these images without a watermark, I think that raises the same problems we just talked about, which is that if anyone can make –