Ben Zhao
Podcast Appearances
We will actually generate a nice looking cow with nothing particularly distracting in the background. And the cow is staring you right in the face.
Glaze is all about how do we protect individual artists so that a third party does not mimic them using some local model. It's much less about these model training companies than it is about individual users who say, gosh, I like so-and-so's art, but I don't want to pay them. So in fact, what I'll do is I'll take my local copy of a model, I'll fine-tune it on that artist's artwork, and then have that model try to mimic them and their style, so that I can ask a model to output artistic works that look like human art from that artist, except I don't have to pay them anything.
What it does is it takes images and alters them in such a way that they basically look the same to a human, but to a particular AI model that's trying to train on them, the visual features it sees associate the image with something entirely different.
For example, you can take an image of a cow eating grass in a field, and if you apply Nightshade to it, perhaps that image instead teaches not so much the bovine cow features, but the features of a 1940s pickup truck.
What happens then is that as that image goes into the training process, the label "this is a cow" becomes associated, in the model that's trying to learn what a cow looks like, with those truck features. It's going to read this image, and in its own language, that image is going to tell it that a cow has four wheels, a big hood, a fender, and a trunk.
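The mechanism described above, perturbing an image within a small visual budget so that a model's feature extractor reads it as a different concept, can be sketched as a toy optimization. Everything here is illustrative, not Nightshade's actual implementation: the linear map `W` stands in for a real model's image encoder, and the dimensions, budget, and learning rate are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen feature extractor: a random linear map.
# (Nightshade targets a real diffusion model's encoder; this is a sketch.)
D, F = 64, 16                      # "pixel" dimension and feature dimension
W = rng.normal(size=(F, D)) / np.sqrt(D)

x = rng.normal(size=D)             # the "cow" image
t = rng.normal(size=D)             # the "pickup truck" target image

eps = 0.5                          # per-pixel budget: keeps the image looking the same
delta = np.zeros(D)                # the perturbation we optimize
lr = 0.1

for _ in range(500):
    # Gradient of ||W(x + delta) - W t||^2 with respect to delta:
    # push the perturbed image's features toward the truck's features.
    grad = 2 * W.T @ (W @ (x + delta) - W @ t)
    delta -= lr * grad
    # Project back into the budget so the change stays visually small.
    delta = np.clip(delta, -eps, eps)

before = np.linalg.norm(W @ x - W @ t)
after = np.linalg.norm(W @ (x + delta) - W @ t)
print(f"feature distance to 'truck': {before:.2f} -> {after:.2f}")
print(f"max pixel change: {np.abs(delta).max():.2f}")
```

The key property is the one the quote describes: in pixel space the change is bounded and small, but in feature space the "cow" image has moved toward the "truck," so a model training on it with the cow label learns truck features.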
Nightshade images tend to be much more potent than usual images, so that even when a model has seen just a few hundred of them, it is willing to throw away everything it has learned from the hundreds of thousands of other images of cows and declare that its understanding has now adapted: that in fact cows have a shiny bumper and four wheels.