The Science Behind AI’s Ability to Recreate Art Styles
G Methods for Style Mimicry
This section summarizes the existing methods that a style forger can use to perform style mimicry. Our work considers only finetuning, since it is reported to be the most effective (Shan et al., 2023a).
G.1 Prompting
Well-known artistic styles contained in the training data (e.g. Van Gogh) can be mimicked by prompting a text-to-image model with a description of the style or the name of the artist. For example, a prompt can be augmented with “painted in a cubist style” or “painted by Van Gogh” to mimic those styles, respectively. Prompting is easy to apply and does not require changes to the model. However, it fails to mimic styles that are not sufficiently represented in the model’s training data, often those of the most vulnerable artists.
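As a minimal illustration of this augmentation step, the style descriptor is simply appended to a content prompt before it is passed to the (unmodified) text-to-image model. The prompt text and helper below are hypothetical examples, not from the paper:

```python
# Illustrative sketch: augmenting a base prompt with a style descriptor.
# The resulting string would be passed unchanged to any text-to-image model.

def augment_prompt(base_prompt: str, style_suffix: str) -> str:
    """Append a style descriptor to a content prompt."""
    return f"{base_prompt}, {style_suffix}"

prompt = "a lighthouse at dusk"                                  # hypothetical
cubist = augment_prompt(prompt, "painted in a cubist style")
van_gogh = augment_prompt(prompt, "painted by Van Gogh")

print(cubist)    # a lighthouse at dusk, painted in a cubist style
print(van_gogh)  # a lighthouse at dusk, painted by Van Gogh
```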
G.2 Img2Img
Img2Img creates an updated version of an image with guidance from a prompt. For this, Img2Img processes an image x with t timesteps of a diffusion process to obtain the diffused image x_t. Then, Img2Img uses the model, with guidance from a prompt P, to reverse the diffusion process into the output image variation x_P. Analogous to prompting, a prompt suffices to transfer a well-known style, but Img2Img also fails for unknown styles.
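The partial-noising step that produces x_t can be sketched with the standard closed-form DDPM forward process, x_t = sqrt(ᾱ_t) x_0 + sqrt(1 − ᾱ_t) ε. The noise schedule and toy image below are assumed values for illustration, not the paper's configuration:

```python
import numpy as np

def diffuse(x0: np.ndarray, t: int, betas: np.ndarray) -> np.ndarray:
    """Apply t forward-diffusion steps in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    rng = np.random.default_rng(0)            # fixed seed for reproducibility
    alpha_bar = np.cumprod(1.0 - betas)[t - 1]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Hypothetical linear noise schedule; x stands in for a small image.
betas = np.linspace(1e-4, 0.02, 1000)
x = np.ones((4, 4))
x_t = diffuse(x, t=500, betas=betas)   # partially noised input for Img2Img
```

The model then reverses the diffusion from x_t under guidance from prompt P, producing the variation x_P; choosing a larger t discards more of the original image's content.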
G.3 Textual Inversion
Textual inversion (Gal et al., 2022) optimizes the embeddings of n new tokens t = [t_1, ..., t_n] that are appended to image prompts P so that generations closely mimic the style of a given set of images. The tokens are optimized via gradient descent on the model's training loss so that P + t generates images in the target style. Textual inversion requires white-box access to the target model, but enables mimicry of unknown styles.
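The core idea, optimizing only the new token embeddings against a frozen model, can be shown with a toy numpy sketch. The linear "model" and reconstruction target here are stand-ins for the diffusion model and its training loss, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for the generative model: a fixed linear map from a
# token embedding to "image features" (purely illustrative).
W = rng.standard_normal((8, 4))
target = rng.standard_normal(8)        # features of the style images

# Only the new token embedding t* is trainable; W stays frozen.
t_star = np.zeros(4)
lr = 0.02
for _ in range(4000):
    residual = W @ t_star - target     # reconstruction error
    grad = W.T @ residual              # gradient of 0.5 * ||residual||^2
    t_star -= lr * grad                # gradient descent on the embedding only

loss = 0.5 * np.sum((W @ t_star - target) ** 2)
```

In the real method the same loop runs over the diffusion training loss, with gradients flowing back through the frozen text encoder and U-Net into the placeholder token's embedding.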
G.4 Finetuning
Finetuning updates the weights of a pretrained text-to-image model to introduce new functionality. In this case, finetuning allows a forger to “teach” the generative model an unknown style using a set of images in the target style and their captions (e.g. “an astronaut riding a horse”). First, all captions C_x are augmented with a special word w∗, such as the name of the artist, to create prompts P_x = C_x + “by w∗”. Then, the model weights are updated to minimize the reconstruction loss of the given images under the augmented prompts. At inference time, the forger can append “by w∗” to any prompt to obtain art in the target style.
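The caption-augmentation step P_x = C_x + “by w∗” is a simple string operation. The captions below are hypothetical examples (the first echoes the one mentioned above):

```python
# Hypothetical captions for the images in the target style.
captions = [
    "an astronaut riding a horse",
    "a portrait of a woman",
]
SPECIAL_WORD = "w*"   # placeholder identifier bound to the target style

# P_x = C_x + " by w*": tag every training caption with the special word.
prompts = [f"{c} by {SPECIAL_WORD}" for c in captions]

# At inference time, the forger appends the same tag to any new prompt.
inference_prompt = f"a city skyline at night by {SPECIAL_WORD}"
```

Finetuning on image/prompt pairs built this way binds the style to the special word, so the tag alone retrieves it at inference time.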
The authors of Glaze identify this finetuning setup as the strongest style mimicry method (Shan et al., 2023a). We validate the success of our style mimicry with a user study, detailed in Appendix K.1.