# Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand (WACV 2023)

Jitesh Jain†, Yuqian Zhou†, Ning Yu, Humphrey Shi
† Equal Contribution
[Project Page] [arXiv] [pdf] [BibTeX]
This repo contains the code for our paper Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand.
## News

- [October 6, 2022]: You can host your own FcF-Inpainting demo using Streamlit by following the instructions here.
- [September 5, 2022]: FcF-Inpainting is now available in the image inpainting tool Lama Cleaner. Thanks to @Sanster for integrating FcF-Inpainting into Lama Cleaner!
- [August 16, 2022]: FcF-Inpainting is accepted to WACV 2023!
- [August 5, 2022]: Project Page, ArXiv Preprint and GitHub Repo are public!
## Installation

- Clone the repo:

  ```bash
  git clone https://github.com/SHI-Labs/FcF-Inpainting.git
  cd FcF-Inpainting
  ```
- Create a conda environment:

  ```bash
  conda create --name fcfgan python=3.7
  conda activate fcfgan
  ```
- Install PyTorch 1.7.1 and the other dependencies:

  ```bash
  pip3 install -r requirements.txt
  export TORCH_HOME=$(pwd) && export PYTHONPATH=.
  ```
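  Before moving on, a quick sanity check of the environment (my suggestion, not part of the repo):

  ```python
  # Quick environment sanity check (not part of the repo).
  import torch

  print(torch.__version__)          # expect 1.7.1
  print(torch.cuda.is_available())  # should print True if CUDA is usable
  ```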
- Download the models for the high receptive field perceptual loss:

  ```bash
  mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
  wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth
  ```
## Dataset Preparation

### CelebA-HQ

- Download data256x256.zip from gdrive, then unzip and split it:

  ```bash
  mkdir -p datasets/
  # unzip & split into train/test/visualization
  bash tools/prepare_celebahq.sh
  ```

  The resulting layout:

  ```
  datasets
  ├── celeba-hq-dataset
  │   ├── train_256
  │   ├── val_source_256
  │   ├── visual_test_source_256
  ```
- Generate 2k `(image, mask)` pairs to be used for evaluation (a quick way to inspect the generated pairs is sketched below):

  ```bash
  bash tools/prepare_celebahq_evaluation.sh
  ```
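A minimal way to eyeball what the preparation script produced; the output directory below is an assumption, so point it at whatever folder the script actually writes:

```python
# Peek at the generated (image, mask) pairs.
# The directory is an assumption; adjust it to the preparation
# script's actual output location.
from pathlib import Path
from PIL import Image

eval_dir = Path("datasets/celeba-hq-dataset/evaluation")  # assumed path
files = sorted(eval_dir.rglob("*.png"))
print(f"{len(files)} PNG files under {eval_dir}")
for f in files[:4]:
    print(f.name, Image.open(f).size)
```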
### Places2

- Download the Places2 dataset:

  ```bash
  mkdir -p datasets/
  mkdir datasets/places2_dataset/
  wget http://data.csail.mit.edu/places/places365/train_large_places365challenge.tar
  tar -xvf train_large_places365challenge.tar -C datasets/places2_dataset/
  mv datasets/places2_dataset/data_large datasets/places2_dataset/train
  wget http://data.csail.mit.edu/places/places365/val_large.tar
  tar -xvf val_large.tar -C datasets/places2_dataset/
  mv datasets/places2_dataset/val_large datasets/places2_dataset/val
  ```

  The resulting layout:

  ```
  datasets
  ├── places2_dataset
  │   ├── train
  │   ├── val
  ```
- Generate 10k `(image, mask)` pairs to be used for validation during training:

  ```bash
  bash tools/prepare_places_val.sh
  ```
- Generate 30k `(image, mask)` pairs to be used for evaluation:

  ```bash
  bash tools/prepare_places_evaluation.sh
  ```
- Install Detectron2 v0.5:

  ```bash
  python -m pip install detectron2==0.5 -f \
    https://dl.fbaipublicfiles.com/detectron2/wheels/cu110/torch1.7/index.html
  ```
- Download the networks for segmentation masks:

  ```bash
  mkdir -p ade20k/ade20k-resnet50dilated-ppm_deepsup/
  wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/encoder_epoch_20.pth
  wget -P ade20k/ade20k-resnet50dilated-ppm_deepsup/ http://sceneparsing.csail.mit.edu/model/pytorch/ade20k-resnet50dilated-ppm_deepsup/decoder_epoch_20.pth
  ```
- Generate `(image, mask)` pairs to be used for segmentation-mask-based evaluation (the sketch below illustrates the instance check):

  ```bash
  bash tools/prepare_places_segm_evaluation.sh
  ```

  Note: Pairs are generated only for images with detected instances.
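For intuition only, here is a hedged sketch of that instance check using an off-the-shelf Detectron2 Mask R-CNN; the actual pipeline lives in `tools/prepare_places_segm_evaluation.sh`, and the image path below is hypothetical:

```python
# Illustrative only: an image yields a (image, mask) pair only if an
# instance-segmentation model detects something in it.
# This is NOT the repo's actual pipeline.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)

image = cv2.imread("datasets/places2_dataset/val/example.jpg")  # hypothetical file
instances = predictor(image)["instances"]
print(f"detected {len(instances)} instances")
# Images with zero detections would be skipped, hence no pair.
```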
## Training

Execute the following command to start training for 25M images (`--kimg 25000`) on 8 GPUs with 16 images per GPU, i.e. a total batch size of 128:

```bash
python train.py \
  --outdir=training-runs-inp \
  --img_data=datasets/places2_dataset/train \
  --gpus 8 \
  --kimg 25000 \
  --gamma 10 \
  --aug 'noaug' \
  --metrics True \
  --eval_img_data datasets/places2_dataset/evaluation/random_segm_256 \
  --batch 128
```

Note: If the process hangs on `Setting up PyTorch plugin ...`, refer to this issue.
## Pretrained Checkpoints

| Checkpoint | Description |
|---|---|
| places_512.pkl | Model trained at 512x512 on Places2 for 25M images |
| places.pkl | Model trained at 256x256 on Places2 for 25M images |
| celeba-hq.pkl | Model trained at 256x256 on CelebA-HQ for 25M images |
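Since the codebase is heavily based on stylegan2-ada-pytorch (see Acknowledgements), the `.pkl` checkpoints should load with its pickle convention; a hedged sketch, assuming the repo ships the usual `dnnlib` and `legacy` modules and the `'G_ema'` key from that codebase:

```python
# Sketch: load a pretrained checkpoint via the stylegan2-ada-pytorch
# pickle convention this codebase builds on. The 'G_ema' key and the
# dnnlib/legacy helpers are assumptions carried over from that repo.
import torch
import dnnlib
import legacy

network_pkl = "places.pkl"  # path to a downloaded checkpoint
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)["G_ema"].eval().requires_grad_(False).to(device)
print(G)
```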
## Evaluation

Run the following command to calculate the metric scores (FID, SSIM and LPIPS) using 8 GPUs:

```bash
python evaluate.py \
  --img_data=datasets/places2_dataset/evaluation/random_segm_256 \
  --network=[path-to-checkpoint] \
  --num_gpus=8
```
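For reference, per-pair SSIM and LPIPS can be reproduced with standard libraries; a standalone sketch (not the repo's `evaluate.py`, which additionally computes FID over the whole evaluation set), assuming two aligned RGB images on disk:

```python
# Standalone per-pair SSIM and LPIPS (illustrative; not evaluate.py).
# Assumes real.png and fake.png are same-size RGB images.
import numpy as np
import torch
import lpips                      # pip install lpips
from PIL import Image
from skimage.metrics import structural_similarity

real = np.array(Image.open("real.png").convert("RGB"))
fake = np.array(Image.open("fake.png").convert("RGB"))

# SSIM over the color image (channel_axis needs scikit-image >= 0.19).
ssim = structural_similarity(real, fake, channel_axis=-1)

# LPIPS expects NCHW tensors scaled to [-1, 1].
def to_tensor(a):
    return torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0

lp = lpips.LPIPS(net="alex")(to_tensor(real), to_tensor(fake)).item()
print(f"SSIM: {ssim:.4f}  LPIPS: {lp:.4f}")
```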
## Demo

Run the following command and find the results in the `visualizations/` folder:

```bash
python demo.py \
  --img_data=datasets/demo/places2 \
  --network=[path-to-checkpoint] \
  --resolution 256
```
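To try your own photos, something like the following can prepare an image and a simple hole mask at the demo resolution; the filenames and the white-means-hole convention are assumptions, so mirror whatever the bundled samples in `datasets/demo/places2` use:

```python
# Sketch: prepare a 256x256 image and a rectangular hole mask for the demo.
# Filenames and the mask convention (white = region to fill) are assumptions;
# follow the naming pattern of the samples in datasets/demo/places2.
import numpy as np
from PIL import Image

img = Image.open("my_photo.jpg").convert("RGB").resize((256, 256))
img.save("datasets/demo/places2/my_photo.png")

mask = np.zeros((256, 256), dtype=np.uint8)
mask[96:192, 64:224] = 255  # the rectangle to be inpainted
Image.fromarray(mask).save("datasets/demo/places2/my_photo_mask.png")
```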
## Citation

```bibtex
@inproceedings{jain2022keys,
  title={Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand},
  author={Jitesh Jain and Yuqian Zhou and Ning Yu and Humphrey Shi},
  booktitle={WACV},
  year={2023}
}
```
## Acknowledgements

Code is heavily based on the following repositories: stylegan2-ada-pytorch and lama.