
Poisoning AI art

Protecting art from being used to train AI

Many artists are frustrated that their work is being used to train generative AI without their consent. Ben Zhao’s group has come up with two tools artists can apply to their art.

Defensive measures against AI

To help achieve this, Ben Zhao’s lab at UChicago created Glaze, a protective filter that you can apply to your art. While the glazed image looks almost identical to the human eye, the filter adds a subtle layer of noise that disrupts an AI model’s ability to learn from the artwork. This is an example of ‘adversarial perturbations’. See the paper here: Glaze
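The core idea behind adversarial perturbations, nudging pixels within a tiny budget so the image’s model-facing features drift toward a decoy, can be sketched as projected gradient descent against a toy feature extractor. Everything below is an illustrative assumption: the random linear map, the decoy target, and the parameter values are stand-ins, not Glaze’s actual method, which targets real diffusion-model feature extractors.

```python
import numpy as np

# Toy stand-in for a model's feature extractor: a fixed random linear map.
# (Hypothetical; Glaze attacks real, non-linear feature extractors.)
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))

def features(image):
    return W @ image

def perturb(image, target_feats, eps=0.02, lr=0.005, steps=50):
    """Projected gradient descent: pull the image's features toward a
    decoy target while clipping every pixel change to at most eps."""
    x = image.copy()
    for _ in range(steps):
        grad = W.T @ (features(x) - target_feats)  # grad of 0.5*||Wx - t||^2
        x = np.clip(x - lr * grad, image - eps, image + eps)
    return x

image = rng.random(64)            # the "artwork" as a flat pixel vector
decoy = features(rng.random(64))  # features of a decoy style

cloaked = perturb(image, decoy)

print(np.max(np.abs(cloaked - image)))  # never exceeds eps, by the clip
print(np.linalg.norm(features(image) - decoy),
      np.linalg.norm(features(cloaked) - decoy))
```

The clip is what keeps the change imperceptible: the cloaked image stays within `eps` of the original pixel-for-pixel, yet its features have moved measurably closer to the decoy style.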

Offensive measures against AI

Rather than simply protecting your art from being scraped, Nightshade attempts to poison the model itself. Again, while a shaded image looks very similar to the original to the human eye, it can have a large effect on an AI model’s output: the added noise hampers the model’s ability to accurately recreate images from prompts, degrading the images it generates. See the paper here: Nightshade
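The poisoning idea can be illustrated, very loosely, with a toy classifier: training samples whose features resemble one concept but carry another concept’s label drag the learned association toward the wrong concept. Everything here (the 2-D “features”, the nearest-neighbour model, the cat/dog labels) is a hypothetical stand-in, not Nightshade’s actual attack on text-to-image models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy 2-D "feature" clusters standing in for two visual concepts.
cats = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
dogs = rng.normal([4.0, 4.0], 0.5, size=(100, 2))

def nearest_neighbour(train_X, train_y):
    """Predict the label of the closest training sample."""
    def predict(x):
        return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]
    return predict

# Clean training set: labels match features.
X_clean = np.vstack([cats, dogs])
y_clean = ["cat"] * 100 + ["dog"] * 100

# Poisoned training set: extra samples whose features look like dogs
# but which carry the "cat" label, analogous to poisoned images that
# look normal to a curator yet mislead the model.
poison = rng.normal([4.0, 4.0], 0.5, size=(100, 2))
X_pois = np.vstack([X_clean, poison])
y_pois = y_clean + ["cat"] * 100

clean_model = nearest_neighbour(X_clean, y_clean)
pois_model = nearest_neighbour(X_pois, y_pois)

# The poisoned model now confuses the two concepts on dog-like inputs.
test_dogs = rng.normal([4.0, 4.0], 0.5, size=(200, 2))
clean_acc = np.mean([clean_model(x) == "dog" for x in test_dogs])
pois_acc = np.mean([pois_model(x) == "dog" for x in test_dogs])
print(f"accuracy on dog images: clean {clean_acc:.2f}, poisoned {pois_acc:.2f}")
```

The point of the sketch is that the poison samples are individually unremarkable; it is their mismatched label-to-feature pairing, at scale, that corrupts what the model learns.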

What next?

This will continue to be a cat-and-mouse game for the foreseeable future: AI models will be improved to evade these measures, and in response the measures will be updated to become effective again. Ben Zhao discusses this in an article here, responding to a paper whose authors overcome Glaze.