This is a TensorFlow implementation of the SRGAN model proposed in the paper "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network".
- Python 3
- TensorFlow 1.1
- OpenCV
- dlib
Download the ImageNet dataset and preprocess it with:
$ cd vgg19/imagenet
$ python get_urls.py
$ python create_db.py
$ python download_images.py
$ python preprocess.py
Train with:
$ cd vgg19
$ python train.py
Or you can download the pretrained model file: vgg19_model.tar.gz
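If you use the pretrained archive, extract it before training (where the extracted files need to go depends on what train.py expects and is not specified here):
$ tar xzf vgg19_model.tar.gz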
Download the LFW dataset and preprocess it with:
$ cd /src/lfw
$ python lfw.py
Train with:
$ cd src
$ python train.py
The result will be stored in "src/result".
This implementation adopts the least squares loss function for the discriminator instead of the sigmoid cross-entropy loss. For details, see Least Squares Generative Adversarial Networks.
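As a minimal sketch of what the least squares (LSGAN) objective looks like in TensorFlow 1.x, the snippet below assumes `d_real` and `d_fake` are the discriminator's raw outputs for real and generated images; the function and variable names are illustrative and not taken from this repository.

```python
import tensorflow as tf

def lsgan_losses(d_real, d_fake):
    # Discriminator: push outputs on real images toward 1 and on fakes toward 0.
    d_loss = 0.5 * (tf.reduce_mean(tf.square(d_real - 1.0))
                    + tf.reduce_mean(tf.square(d_fake)))
    # Generator: push the discriminator's output on fakes toward 1.
    g_loss = 0.5 * tf.reduce_mean(tf.square(d_fake - 1.0))
    return d_loss, g_loss
```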
The paper reports that VGG54 gives the perceptually most convincing results, but this implementation uses feature maps from multiple layers of the VGG19 network (i.e. phi12, phi22, phi34, phi44, phi54). A sketch of such a multi-layer content loss follows below.
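The following is a rough sketch of a content loss averaged over several VGG19 feature maps, assuming `vgg_real` and `vgg_fake` are lists of the corresponding feature-map tensors (phi12, phi22, phi34, phi44, phi54) extracted from the ground-truth and super-resolved images; the names are illustrative, not the repository's actual API.

```python
import tensorflow as tf

def content_loss(vgg_real, vgg_fake):
    # MSE between each pair of feature maps, averaged over all selected layers.
    losses = [tf.reduce_mean(tf.square(r - f))
              for r, f in zip(vgg_real, vgg_fake)]
    return tf.add_n(losses) / float(len(losses))
```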