Glaze and Nightshade are tools designed to discourage the unauthorized scraping of original artwork for training text-to-image models. Glaze conceals the stylistic information in your image by applying subtle (usually imperceptible) changes to the image itself, so that models trained on it have a harder time mimicking your style. Nightshade can be used to poison AI models: it lets users embed misleading information about what an image depicts, so that models trained on the altered image learn incorrect associations.
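The sketch below is only a conceptual illustration of the general idea both tools rely on, not the actual Glaze or Nightshade code: a small, bounded perturbation is optimized so that a model's feature extractor "sees" the image differently while a human barely notices any change. The names `cloak`, `feature_extractor`, and `target_features` are hypothetical placeholders, and the real tools use far more sophisticated methods.

```python
# Conceptual sketch only -- NOT the Glaze or Nightshade implementation.
# Shows a generic adversarial perturbation: nudge an image so its features
# move toward a decoy target while keeping the visible change tiny.
import torch
import torch.nn.functional as F

def cloak(image, feature_extractor, target_features, eps=0.03, steps=50, lr=0.01):
    """Add a bounded perturbation that pushes the image's features
    toward `target_features` (e.g., a different artistic style)."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = torch.clamp(image + delta, 0.0, 1.0)
        feats = feature_extractor(perturbed)
        # Pull the extracted features toward the decoy target.
        loss = F.mse_loss(feats, target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the change imperceptible: bound the perturbation's magnitude.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

In practice, keeping `eps` small is what makes the alteration hard to notice by eye while still shifting what a model learns from the image.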

Link to the University of Chicago’s User Guide