by Yi Wang, Ying-Cong Chen, Xiangyu Zhang, Jian Sun, Jiaya Jia. The code will be updated.
This repository gives the implementation of our method from the CVPR 2020 paper 'Attentive Normalization for Conditional Image Generation'. The paper studies visual long-range dependency modeling in a normalization manner, verified on both class-conditional image generation and image inpainting tasks.
We normalize the input feature maps spatially according to the semantic layouts predicted from them. This improves the modeling of distant relationships in the input while preserving semantics spatially.
Our method is built upon instance normalization (IN). It consists of a semantic layout learning module (semantic layout prediction + self-sampling regularization) and regional normalization.
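The regional normalization step can be summarized as follows. This is a minimal sketch for illustration only (it is not the released code in inpaint_attnorm), assuming NHWC feature maps and a soft semantic layout with K region channels that sums to 1 over K at every spatial location:

```python
import tensorflow as tf

def regional_normalization(x, layout, eps=1e-5):
    """Minimal sketch of regional normalization (not the authors' exact code).

    x:      feature maps, shape [B, H, W, C]
    layout: soft semantic layout, shape [B, H, W, K], summing to 1 over K
    """
    x_e = tf.expand_dims(x, axis=-1)       # [B, H, W, C, 1]
    p = tf.expand_dims(layout, axis=3)     # [B, H, W, 1, K]
    # Per-region, per-channel statistics over the spatial dimensions,
    # weighted by the soft region-membership probabilities.
    w_sum = tf.reduce_sum(p, axis=[1, 2], keepdims=True) + eps              # [B, 1, 1, 1, K]
    mean = tf.reduce_sum(x_e * p, axis=[1, 2], keepdims=True) / w_sum       # [B, 1, 1, C, K]
    var = tf.reduce_sum(p * tf.square(x_e - mean), axis=[1, 2], keepdims=True) / w_sum
    # Normalize within each region, then merge regions back with the layout weights.
    x_norm = (x_e - mean) / tf.sqrt(var + eps)                              # [B, H, W, C, K]
    return tf.reduce_sum(x_norm * p, axis=-1)                               # [B, H, W, C]
```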
This module can be applied to current GAN-based conditional image generation tasks, e.g., class-conditional image generation and image inpainting.
In common practice, Attentive Normalization is placed between a convolutional layer and the activation layer. In the testing phase, we remove the randomness in AttenNorm by switching off its self-sampling branch, so the generation procedure is deterministic and affected only by the input.
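For illustration, the hedged sketch below shows this placement (conv → AttenNorm → activation). The 1x1 layout-prediction conv and the `regional_normalization` function from the sketch above stand in for the full module in inpaint_attnorm, and the self-sampling branch used only during training is omitted, matching the deterministic test-time behavior:

```python
def conv_attnorm_relu(x, filters, num_regions=8, name='block'):
    """Sketch of the usual placement: conv -> AttenNorm -> activation (test-time variant)."""
    with tf.variable_scope(name):
        y = tf.layers.conv2d(x, filters, 3, padding='same', name='conv')
        # Predict a soft semantic layout from the features themselves (1x1 conv + softmax),
        # then normalize each region separately (see regional_normalization above).
        layout = tf.nn.softmax(tf.layers.conv2d(y, num_regions, 1, name='layout'), axis=-1)
        y = regional_normalization(y, layout)
        return tf.nn.relu(y)
```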
The TensorFlow implementation of our attentive normalization is given in inpaint_attnorm.
If our method is useful for your research, please consider citing:
@article{wang2020attentive,
title={Attentive Normalization for Conditional Image Generation},
author={Wang, Yi and Chen, Ying-Cong and Zhang, Xiangyu and Sun, Jian and Jia, Jiaya},
journal={arXiv preprint arXiv:2004.03828},
year={2020}
}
Our TensorFlow code is built upon DeepFill (v1).
Please send email to [email protected].