Data Attribution for Text-to-Image Models by Unlearning Synthesized Images

Sheng-Yu Wang1
Aaron Hertzmann2
Alexei A. Efros3
Jun-Yan Zhu1
Richard Zhang2
1Carnegie Mellon University
2Adobe Research
3UC Berkeley

[Code]

[Paper]


Abstract

The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. We can define "influence" by saying that, for a given output, if a model is retrained from scratch without that output's most influential images, the model should then fail to generate that output image. Unfortunately, directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining from scratch. We propose a new approach that efficiently identifies highly-influential images. Specifically, we simulate unlearning the synthesized image, proposing a method to increase the training loss on the output image, without catastrophic forgetting of other, unrelated concepts. Then, we find training images that are forgotten by proxy, identifying ones with significant loss deviations after the unlearning process, and label these as influential. We evaluate our method with a computationally intensive but "gold-standard" retraining from scratch and demonstrate our method's advantages over previous methods.
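
For concreteness, below is a minimal sketch of this unlearning-then-attribution pipeline. It assumes a generic noise-prediction model called as model(x_t, t, cond), uses a simplified noise schedule, and substitutes a plain L2 weight-drift penalty for whatever forgetting-prevention regularizer the actual method uses; all function names are illustrative and not the released implementation.

# A minimal sketch of the unlearning-then-attribution pipeline, assuming a generic
# noise-prediction model called as model(x_t, t, cond) and a simplified noise schedule.
# A plain L2 weight-drift penalty stands in for the forgetting prevention used in the paper.
import copy
import torch

def diffusion_loss(model, latent, cond, num_samples=4):
    # Monte-Carlo estimate of the denoising loss E_{t, eps} || eps - eps_theta(x_t, t, c) ||^2.
    losses = []
    for _ in range(num_samples):
        t = torch.rand(1)
        eps = torch.randn_like(latent)
        x_t = (1 - t).sqrt() * latent + t.sqrt() * eps  # simplified noise schedule
        losses.append(((model(x_t, t, cond) - eps) ** 2).mean())
    return torch.stack(losses).mean()

def unlearn_synthesized_image(model, synth_latent, synth_cond, num_steps=50, lr=1e-5, reg_weight=1e3):
    # Fine-tune a copy of the model to *increase* the loss on the synthesized image
    # (gradient ascent), while penalizing drift from the original weights so that
    # unrelated concepts are not catastrophically forgotten.
    unlearned = copy.deepcopy(model)
    orig_params = [p.detach().clone() for p in unlearned.parameters()]
    opt = torch.optim.Adam(unlearned.parameters(), lr=lr)
    for _ in range(num_steps):
        loss = diffusion_loss(unlearned, synth_latent, synth_cond)
        drift = sum(((p - p0) ** 2).sum() for p, p0 in zip(unlearned.parameters(), orig_params))
        (-loss + reg_weight * drift).backward()  # maximize loss on the query, minimize drift
        opt.step()
        opt.zero_grad()
    return unlearned

def score_training_images(model, unlearned, train_set):
    # Influence score: how much each training example is "forgotten by proxy",
    # i.e., how much its loss increases after unlearning the synthesized image.
    scores = []
    for latent, cond in train_set:
        with torch.no_grad():
            before = diffusion_loss(model, latent, cond)
            after = diffusion_loss(unlearned, latent, cond)
        scores.append((after - before).item())
    return scores  # sort descending; the top entries are the attributed training images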

Results

Attribution results on MSCOCO models.
(Left) We show generated samples used as queries and the attributed training images identified by different methods. Our method retrieves images with more similar visual attributes. Notably, it better matches the poses of the buses (accounting for random flips during training) and the poses and number of skiers.
(Right) Next, we perform a counterfactual analysis: for our method and each baseline, we regenerate images with leave-K-out models (retrained from scratch with the top-K attributed images removed), for several values of K, all under the same random noise and text prompt. A significant deviation in the regenerated image indicates that the attribution algorithm identified critical, influential images. While the baselines regenerate images similar to the original, our method produces ones that deviate significantly, even with as few as 500 influential images removed (∼0.42% of the dataset).
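
The counterfactual check can be summarized at a pseudocode level as follows. Here retrain_from_scratch, generate, and similarity are hypothetical placeholders for the expensive retraining, fixed-noise sampling, and an image-similarity metric.

# A pseudocode-level sketch of the leave-K-out counterfactual evaluation.
# retrain_from_scratch, generate, and similarity are hypothetical placeholders.
def leave_k_out_eval(train_set, scores, k, prompt, noise, original_image):
    # Drop the top-k attributed images, retrain, and regenerate with the same
    # noise and prompt; a large drop in similarity to the original sample means
    # the attributed images were truly influential.
    top_k = set(sorted(range(len(train_set)), key=lambda i: scores[i], reverse=True)[:k])
    reduced_set = [ex for i, ex in enumerate(train_set) if i not in top_k]
    retrained = retrain_from_scratch(reduced_set)      # "gold-standard" retraining
    regenerated = generate(retrained, prompt, noise)   # same random noise and prompt
    return similarity(original_image, regenerated)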

Spatially-localized attribution. Given a synthesized image (left), we crop regions containing specific objects using GroundingDINO. We attribute each object separately by running the unlearning step only on the pixels within the cropped region. Our method can attribute different synthesized regions to different training images.
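
A minimal sketch of such a localized loss is below: the denoising error is averaged only over the detected box, so the unlearning signal (and hence the attribution) comes from that region. The model(x_t, t, cond) interface and the box coordinates are assumptions, and the noise schedule is again simplified.

# A minimal sketch of restricting the unlearning loss to a detected object region.
# The box coordinates (here in latent-space pixels) would come from a detector
# such as GroundingDINO.
import torch

def masked_diffusion_loss(model, latent, cond, box):
    # Denoising loss averaged only over the cropped region, so the unlearning
    # signal (and hence the attribution) is localized to that object.
    x0, y0, x1, y1 = box
    mask = torch.zeros_like(latent)
    mask[..., y0:y1, x0:x1] = 1.0
    t = torch.rand(1)
    eps = torch.randn_like(latent)
    x_t = (1 - t).sqrt() * latent + t.sqrt() * eps  # simplified noise schedule
    err = (model(x_t, t, cond) - eps) ** 2
    return (err * mask).sum() / mask.sum().clamp(min=1)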


Evaluating on the Customized Model Benchmark. We evaluate on this benchmark for attributing large-scale text-to-image models, which focuses on a specialized form of attribution: attributing customized models trained on one or a few exemplar images. The red boxes indicate the ground-truth exemplar images used to customize each model. DINO (AbC) and CLIP (AbC) correspond to DINO and CLIP features finetuned directly on the benchmark, respectively. Both our method and the baselines successfully identify the exemplar images for object-centric models (left), while our method outperforms the baselines on artistic-style models (right).
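
For reference, a rough sketch of how such feature-similarity baselines score training images: embed the query and the candidate training images, then rank by cosine similarity. Here embed is a hypothetical stand-in for a DINO or CLIP (possibly AbC-finetuned) feature extractor.

# A rough sketch of a feature-similarity retrieval baseline.
# `embed` is a hypothetical stand-in for a DINO or CLIP feature extractor
# returning a (1, d) feature per image.
import torch
import torch.nn.functional as F

def rank_by_feature_similarity(embed, query_image, candidate_images, top_k=10):
    q = F.normalize(embed(query_image), dim=-1)                               # (1, d)
    c = F.normalize(torch.cat([embed(x) for x in candidate_images]), dim=-1)  # (N, d)
    sims = c @ q.squeeze(0)                                                   # cosine similarity per candidate
    return sims.topk(top_k).indices.tolist()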


Paper


Sheng-Yu Wang, Aaron Hertzmann, Alexei A. Efros, Jun-Yan Zhu, Richard Zhang.
Data Attribution for Text-to-Image Models by Unlearning Synthesized Images.
In ArXiv, 2024. (Paper)
[Bibtex]



Acknowledgements

We thank Kristian Georgiev for answering all of our inquiries regarding the JourneyTRAK implementation and evaluation, and for providing us with their models and an earlier version of the JourneyTRAK code. We thank Nupur Kumari, Kangle Deng, and Grace Su for feedback on the draft. This work is partly supported by the Packard Fellowship, a JPMC Faculty Research Award, and NSF IIS-2239076. The website template is from Colorful Colorization.


Citation

@article{wang2024attributebyunlearning,
  title={Data Attribution for Text-to-Image Models by Unlearning Synthesized Images},
  author={Wang, Sheng-Yu and Hertzmann, Aaron and Efros, Alexei A and Zhu, Jun-Yan and Zhang, Richard},
  journal={arXiv preprint arXiv:2406.09408},
  year={2024},
}