Data Attribution for Text-to-Image Models by Unlearning Synthesized Images

Sheng-Yu Wang1
Aaron Hertzmann2
Alexei A. Efros3
Jun-Yan Zhu1
Richard Zhang2
1Carnegie Mellon University
2Adobe Research
3UC Berkeley

[Code]

[Paper (NeurIPS 2024)]

[Slides]

[Poster]


Abstract

The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image. Influence is defined such that, for a given output, if a model were retrained from scratch without the most influential images, it would fail to reproduce the same output. Unfortunately, directly searching for these influential images is computationally infeasible, since it would require repeatedly retraining models from scratch. In our work, we propose an efficient data attribution method that simulates unlearning the synthesized image. We achieve this by increasing the training loss on the output image, without catastrophic forgetting of other, unrelated concepts. We then identify training images with significant loss deviations after the unlearning process and label them as influential. We evaluate our method against a computationally intensive but "gold-standard" retraining from scratch and demonstrate its advantages over previous methods.
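
Below is a minimal sketch of the attribution-by-unlearning idea, using a toy denoiser in place of a full text-to-image model. The names (ToyDenoiser, diffusion_loss, unlearn, rank_by_loss_deviation), the hyperparameters, and the simplified forward process are illustrative assumptions, not the released implementation; in particular, the regularization that prevents catastrophic forgetting of unrelated concepts is omitted for brevity.

# Illustrative sketch only: a toy unconditional denoiser stands in for a
# text-to-image diffusion model, and the forgetting regularizer is omitted.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion UNet: predicts the added noise."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x_noisy):
        return self.net(x_noisy)

def diffusion_loss(model, x0, noise=None):
    """Simplified epsilon-prediction loss for one (flattened) image."""
    if noise is None:
        noise = torch.randn_like(x0)
    return F.mse_loss(model(x0 + noise), noise)

def unlearn(model, synthesized, steps=20, lr=1e-4):
    """Gradient ascent on the synthesized image's loss, i.e., 'forgetting' it."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-diffusion_loss(model, synthesized)).backward()  # negate to *increase* the loss
        opt.step()
    return model

def rank_by_loss_deviation(original, unlearned, train_images):
    """Training images whose loss deviates most after unlearning are labeled influential."""
    deltas = []
    with torch.no_grad():
        for x in train_images:
            noise = torch.randn_like(x)  # reuse the same noise for both models
            deltas.append((diffusion_loss(unlearned, x, noise)
                           - diffusion_loss(original, x, noise)).item())
    return sorted(range(len(train_images)), key=lambda i: -deltas[i])

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyDenoiser()
    synthesized = torch.randn(1, 64)
    train_set = [torch.randn(1, 64) for _ in range(100)]
    unlearned = unlearn(model, synthesized)
    print(rank_by_loss_deviation(model, unlearned, train_set)[:10])

In the real setting, the same two ingredients apply: a short unlearning run on the synthesized image, followed by ranking training images by how much their diffusion loss changes under the unlearned model.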

Results

Attribution results on MSCOCO models.
(Left) We show generated samples used as queries and the attributed training images identified by different methods. Our method retrieves images with more similar visual attributes; notably, it better matches the poses of the buses (accounting for random flips during training) and the poses and number of skiers.
(Right) We then perform a counterfactual analysis (see the sketch below), comparing images generated by leave-K-out models for our method and the baselines, using different values of K, all under the same random noise and text prompt. A large deviation in the regenerated image indicates that the attribution algorithm identified critical, influential images. While the baselines regenerate images similar to the original, our method produces images that deviate significantly, even with as few as 500 influential images removed (∼0.42% of the dataset). The concurrent work D-TRAK performs on par with our method in this test case.
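
For concreteness, here is a toy version of the leave-K-out check, assuming we already have an attribution ranking, a retraining routine, and a fixed noise for regeneration. train_toy_model, generate, and the MSE-based deviation are simplified stand-ins, not the paper's evaluation pipeline.

# Toy leave-K-out protocol: remove the top-K attributed images, retrain,
# regenerate with the same fixed noise, and measure how much the output moves.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_toy_model(images, epochs=200, seed=0):
    """Stand-in for retraining from scratch: a linear 'denoiser' fit to the set."""
    torch.manual_seed(seed)
    model = nn.Linear(16, 16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    data = torch.stack(images)
    for _ in range(epochs):
        noise = torch.randn_like(data)
        loss = F.mse_loss(model(data + noise), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def generate(model, fixed_noise):
    """Stand-in for sampling with a fixed seed and prompt: one crude denoising step."""
    with torch.no_grad():
        return fixed_noise - model(fixed_noise)

if __name__ == "__main__":
    torch.manual_seed(0)
    train_set = [torch.randn(16) for _ in range(200)]
    ranking = list(range(200))        # placeholder for an attribution ranking
    fixed_noise = torch.randn(1, 16)

    reference = generate(train_toy_model(train_set), fixed_noise)
    for K in (20, 50, 100):
        removed = set(ranking[:K])
        kept = [x for i, x in enumerate(train_set) if i not in removed]
        regenerated = generate(train_toy_model(kept), fixed_noise)
        # A larger deviation means the removed images were more influential.
        print(K, F.mse_loss(regenerated, reference).item())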

Spatially-localized attribution. Given a synthesized image (left), we crop regions containing specific objects using GroundingDINO. We attribute each object separately by running the forgetting step only on the pixels within the cropped region. Our method can thus attribute different synthesized regions to different training images.
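
A minimal sketch of region-restricted forgetting is shown below, assuming a binary mask (e.g., derived from a GroundingDINO box) over the synthesized image; the mask simply restricts the epsilon-prediction loss to the cropped pixels. As before, the toy denoiser and hyperparameters are illustrative assumptions.

# Illustrative sketch: unlearning driven only by the masked region of the image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDenoiser(nn.Module):
    """Stand-in for a diffusion UNet: predicts the added noise."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x_noisy):
        return self.net(x_noisy)

def masked_diffusion_loss(model, x0, mask, noise=None):
    """Epsilon-prediction loss averaged only over pixels where mask == 1."""
    if noise is None:
        noise = torch.randn_like(x0)
    per_pixel = (model(x0 + noise) - noise) ** 2
    denom = mask.expand_as(per_pixel).sum().clamp(min=1)
    return (per_pixel * mask).sum() / denom

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyDenoiser()
    synthesized = torch.randn(1, 3, 64, 64)
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., 16:48, 16:48] = 1.0     # box around one object

    opt = torch.optim.SGD(model.parameters(), lr=1e-4)
    for _ in range(10):               # forget only the masked region
        opt.zero_grad()
        (-masked_diffusion_loss(model, synthesized, mask)).backward()
        opt.step()

Ranking training images by loss deviation then proceeds exactly as in the global case, once per masked region.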


Evaluating on the Customized Model Benchmark. We evaluate on this benchmark for attributing large-scale text-to-image models, which focuses on a specialized form of attribution: attributing customized models trained on one or a few exemplar images. Red boxes indicate the ground-truth exemplar images used to customize the model. DINO (AbC) and CLIP (AbC) denote DINO and CLIP features finetuned directly on the benchmark, respectively. Both our method and the AbC baselines successfully identify the exemplar images on object-centric models (left), while our method outperforms the baselines on artistic-style models (right). D-TRAK does not perform as well in this test case.


Paper


Sheng-Yu Wang, Aaron Hertzmann, Alexei A. Efros, Jun-Yan Zhu, and Richard Zhang.
Data Attribution for Text-to-Image Models by Unlearning Synthesized Images.
In NeurIPS, 2024. (Paper)
[Bibtex]



Acknowledgements

We thank Kristian Georgiev for answering all of our inquiries regarding the JourneyTRAK implementation and evaluation, and for providing us with their models and an earlier version of the JourneyTRAK code. We thank Nupur Kumari, Kangle Deng, and Grace Su for feedback on the draft. This work is partly supported by the Packard Fellowship, a JPMC Faculty Research Award, and NSF IIS-2239076. The website template is from Colorful Colorization.


Citation

@inproceedings{wang2024attributebyunlearning,
  title={Data Attribution for Text-to-Image Models by Unlearning Synthesized Images},
  author={Wang, Sheng-Yu and Hertzmann, Aaron and Efros, Alexei A. and Zhu, Jun-Yan and Zhang, Richard},
  booktitle={NeurIPS},
  year={2024},
}