Perceptual Artifacts Localization for Image Synthesis Tasks
Lingzhi Zhang Zhengjie Xu Connelly Barnes Yuqian Zhou Qing Liu
He Zhang Zhe Lin Sohrab Amirghodsi Eli Shechtman Jianbo Shi
[Paper]
[Demo]
[Code]
[Dataset]
Note: the pink mask indicates the perceptual artifacts region predicted by our method.

Abstract

Recent advancements in deep generative models have made it possible to produce photo-realistic images for a variety of tasks. However, generated images often contain perceptual artifacts in certain regions that require manual retouching. In this paper, we conduct an extensive empirical study of Perceptual Artifacts Localization (PAL) across diverse image synthesis tasks. We introduce a new dataset of 10,168 generated images with per-pixel perceptual artifact labels spanning ten image synthesis tasks. We train a segmentation model on the proposed dataset that reliably localizes artifacts across these tasks, and we demonstrate that the pretrained model can adapt efficiently to unseen generative models with as few as ten labeled images. Moreover, we propose a simple yet effective zoom-in inpainting pipeline that automatically fixes perceptual artifacts in generated images. In our experiments, we illustrate several useful downstream applications, including automatically fixing artifacts, evaluating image quality without a reference, and detecting abnormal regions in images.
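The zoom-in inpainting pipeline described above can be sketched in a few steps: predict a per-pixel artifact mask, crop a padded bounding box around the flagged region, inpaint that crop, and paste the result back. The sketch below is a minimal illustration of that control flow only; `predict_artifact_mask` and `inpaint` are hypothetical stand-ins (a brightness-outlier heuristic and a mean-fill, respectively), not the paper's actual segmentation or inpainting models.

```python
import numpy as np

def predict_artifact_mask(image):
    # Hypothetical stand-in for the paper's artifact segmentation model:
    # flag pixels whose brightness deviates strongly from the image mean.
    gray = image.mean(axis=2)
    return (np.abs(gray - gray.mean()) > 2 * gray.std()).astype(np.uint8)

def inpaint(crop, mask_crop):
    # Hypothetical stand-in for an off-the-shelf inpainting model:
    # fill masked pixels with the mean color of the unmasked ones.
    out = crop.copy()
    known = mask_crop == 0
    if known.any():
        out[mask_crop == 1] = crop[known].mean(axis=0)
    return out

def zoom_in_fix(image, pad=8):
    """Predict artifacts, zoom into their bounding box, inpaint, paste back."""
    mask = predict_artifact_mask(image)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return image  # no artifacts detected; nothing to fix
    # Padded bounding box around the artifact region, for surrounding context.
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    fixed_crop = inpaint(image[y0:y1, x0:x1], mask[y0:y1, x0:x1])
    out = image.copy()
    out[y0:y1, x0:x1] = fixed_crop
    return out

# Usage: a gray image with a small bright blemish gets smoothed back to gray.
img = np.full((64, 64, 3), 0.5)
img[30:34, 30:34] = 1.0
fixed = zoom_in_fix(img)
```

Zooming into a crop before inpainting gives the inpainting model a higher effective resolution on the artifact region than running it on the full frame would.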



Code

Code and pretrained models for perceptual artifacts localization, along with the zoom-in inpainting pipeline.

 [GitHub]


Paper and Supplementary Material

L. Zhang, Z. Xu, C. Barnes, Y. Zhou, Q. Liu, H. Zhang, Z. Lin, S. Amirghodsi, E. Shechtman, J. Shi.
Perceptual Artifacts Localization for Image Synthesis Tasks.
In ICCV, 2023.
(hosted on ArXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the template code can be found here.