CNN-generated images are surprisingly easy to spot...for now

Sheng-Yu Wang1
Oliver Wang2
Richard Zhang2
Andrew Owens1
Alexei A. Efros1
1UC Berkeley
2Adobe Research

Code (coming soon) [GitHub]

ArXiv 2019 [Paper]

Are CNN-generated images hard to distinguish from real images? We show that a classifier trained to detect images generated by only one CNN (ProGAN, far left) can detect those generated by many other models (remaining columns).


In this work we ask whether it is possible to create a "universal" detector for telling apart real images from those generated by a CNN, regardless of architecture or dataset used. To test this, we collect a dataset consisting of fake images generated by 11 different CNN-based image generator models, chosen to span the space of commonly used architectures today (ProGAN, StyleGAN, BigGAN, CycleGAN, StarGAN, GauGAN, DeepFakes, cascaded refinement networks, implicit maximum likelihood estimation, second-order attention super-resolution, seeing-in-the-dark). We demonstrate that, with careful pre- and post-processing and data augmentation, a standard image classifier trained on only one specific CNN generator (ProGAN) is able to generalize surprisingly well to unseen architectures, datasets, and training methods (including the just-released StyleGAN2). Our findings suggest the intriguing possibility that today's CNN-generated images share some common systematic flaws, preventing them from achieving realistic image synthesis.
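A key ingredient in this generalization is aggressive data augmentation at training time: randomly blurring and JPEG-compressing training images so the classifier cannot rely on fragile, easily destroyed artifacts. The sketch below illustrates the sampling logic for such an augmentation scheme; the specific probabilities and parameter ranges here are illustrative assumptions, not the paper's exact settings.

```python
import random

# Illustrative augmentation hyperparameters (assumptions for this sketch):
BLUR_PROB = 0.5            # probability of applying Gaussian blur
JPEG_PROB = 0.5            # probability of applying JPEG re-compression
SIGMA_RANGE = (0.0, 3.0)   # blur sigma, sampled uniformly
QUALITY_RANGE = (30, 100)  # JPEG quality, sampled uniformly over integers

def sample_augmentation(rng=random):
    """Sample one augmentation configuration for a training image.

    Returns a dict describing which corruptions to apply; an image
    pipeline would then run the corresponding blur / JPEG round-trip
    (e.g. via scipy.ndimage.gaussian_filter and a PIL JPEG re-encode)
    before feeding the image to the real-vs-fake classifier.
    """
    ops = {}
    if rng.random() < BLUR_PROB:
        ops["blur_sigma"] = rng.uniform(*SIGMA_RANGE)
    if rng.random() < JPEG_PROB:
        ops["jpeg_quality"] = rng.randint(*QUALITY_RANGE)
    return ops
```

Because each corruption is applied only with some probability, the classifier sees clean, blurred, compressed, and blurred-and-compressed images during training, which is what makes it robust to the post-processing that fakes typically undergo in the wild.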


Despite the alarm that has been raised by the rapidly improving quality of image synthesis methods, our results suggest that today's CNN-generated images retain detectable fingerprints that distinguish them from real photos. This allows forensic classifiers to generalize from one model to another without extensive adaptation.

However, this does not mean that the current situation will persist. Due to the difficulty of achieving Nash equilibria, none of the current GAN-based architectures are optimized to convergence, i.e. the generator never fully wins against the discriminator. Were this to change, we would suddenly find ourselves in a situation in which synthetic images are completely indistinguishable from real ones.

Even with the current techniques, there remain practical reasons for concern. First, even the best forensics detector will have some trade-off between true-positive and false-positive rates. Since a malicious user is typically looking to create a single fake image (rather than a distribution of fakes), they could simply hand-pick the fake image which happens to pass the detection threshold. Second, malicious use of fake imagery is likely to be deployed on a social media platform (Facebook, Twitter, YouTube, etc.), so the data will undergo a number of often aggressive transformations (compression, resizing, re-sampling, etc.). While we demonstrated robustness to some degree of JPEG compression, blurring, and resizing, much more work is needed to evaluate how well the current detectors can cope with these transformations in-the-wild. Finally, most documented instances of effective deployment of visual fakes to date have used classic "shallow" methods, such as Photoshop. We have experimented with running our detector on the face-aware liquify dataset from [Wang et al. ICCV 2019], and found that our method performs at chance on this data. This suggests that shallow methods exhibit fundamentally different behavior than deep methods, and should not be neglected.
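The hand-picking concern above admits a simple back-of-the-envelope calculation: if a detector flags fakes with probability d, then each generated fake slips through with probability 1 - d, so the number of attempts until one passes follows a geometric distribution with mean 1 / (1 - d). The snippet below (a hypothetical illustration, not part of the paper) makes this concrete.

```python
def expected_attempts(detection_rate):
    """Expected number of fakes a malicious user must generate before
    one evades a detector that flags fakes with the given probability.

    This is the mean of a geometric distribution with success
    probability (1 - detection_rate), assuming independent attempts.
    """
    miss_rate = 1.0 - detection_rate
    if miss_rate <= 0.0:
        return float("inf")  # a perfect detector can never be evaded
    return 1.0 / miss_rate
```

Even a detector that catches 95% of fakes is defeated after about 20 tries on average, which is why a single hand-picked fake is a much harder threat model than detecting fakes in aggregate.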

We note that detecting fake images is just one small piece of the puzzle of how to combat the threat of visual disinformation. Effective solutions will need to incorporate a wide range of strategies, from technical to social to legal.

Code and Models

Coming soon! [GitHub]


S.-Y. Wang, O. Wang, R. Zhang, A. Owens, A. A. Efros.
CNN-generated images are surprisingly easy to spot...for now
In arXiv, 2019 (Paper)


We'd like to thank Jaakko Lehtinen, Taesung Park, and Jacob (Minyoung) Huh for helpful discussions. We are grateful to Xu Zhang for significant help with comparisons. This work was funded, in part, by DARPA MediFor, an Adobe gift, and a grant from the UC Berkeley Center for Long-Term Cybersecurity. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.