
# Model Zoo

## Inpainting

### Benchmark

#### Global&Local

Please refer to GL for details.

#### Partial Conv

Please refer to PConv for details.

#### DeepFillv1

Please refer to DeepFillv1 for details.

#### DeepFillv2

Please refer to DeepFillv2 for details.

## Matting

### Overview

| Method   | SAD   | MSE    | GRAD  | CONN  |
| -------- | ----- | ------ | ----- | ----- |
| DIM      | 50.62 | 0.0151 | 29.01 | 50.69 |
| GCA      | 34.77 | 0.0080 | 16.33 | 32.20 |
| IndexNet | 45.56 | 0.0125 | 25.49 | 44.79 |

The results above follow the original implementations of these methods, which adopt different data augmentation and preprocessing pipelines. We also provide a benchmark of these methods under the same settings, i.e., using the same data augmentation as DIM. Results are shown below.

| Method     | SAD   | MSE    | GRAD  | CONN  |
| ---------- | ----- | ------ | ----- | ----- |
| DIM        | 50.62 | 0.0151 | 29.01 | 50.69 |
| GCA*       | 49.42 | 0.0129 | 28.07 | 49.47 |
| IndexNet*  | 50.11 | 0.0164 | 30.82 | 49.53 |

*: We only ran one experiment under this setting.

### Benchmark

#### Deep Image Matting (DIM)

Please refer to DIM for details.

#### GCA Matting

Please refer to GCA for details.

#### IndexNet Matting

Please refer to IndexNet for details.

### Evaluation Details

#### Data

We provide a Python script, `preprocess_comp1k_dataset.py`, for compositing the foreground images of the Adobe Composition-1k (comp1k) dataset onto background images from the MS COCO dataset. The resulting merged images are identical to those produced by Adobe's official compositing script.
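For reference, the core of this step is standard alpha blending of each foreground onto a background. The sketch below illustrates that step only, under the assumption that the background has already been resized to the foreground's size; the function name and arguments here are illustrative, not the script's actual API, and `preprocess_comp1k_dataset.py` additionally handles file pairing and dataset layout.

```python
import numpy as np
from PIL import Image

def composite_one(fg_path, alpha_path, bg_path):
    """Alpha-blend one foreground onto one background.

    Minimal sketch only: assumes the background has already been
    resized/cropped to the foreground size, which the real script
    takes care of along with file pairing and output layout.
    """
    fg = np.asarray(Image.open(fg_path).convert('RGB'), dtype=np.float32)
    bg = np.asarray(Image.open(bg_path).convert('RGB'), dtype=np.float32)
    # The alpha matte is a single-channel image in [0, 255].
    alpha = np.asarray(Image.open(alpha_path).convert('L'), dtype=np.float32) / 255.0
    # Standard compositing: merged = alpha * fg + (1 - alpha) * bg.
    merged = alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg
    return Image.fromarray(merged.round().astype(np.uint8))
```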

#### Evaluation Implementation Details

We provide a Python script, `evaluate_comp1k.py`, for evaluating the test results of matting models. The four evaluation metrics (SAD, MSE, GRAD, and CONN) are calculated in the same way as in Adobe's official evaluation script. We observe only minor differences between the results of our Python script and the official one, which have no effect on the reported performance.
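As a rough illustration of what two of these metrics measure, the sketch below computes SAD and MSE on alpha mattes scaled to [0, 1]. The function names and masking conventions (MSE averaged over the unknown trimap region, SAD reported in thousands) follow common Composition-1k practice but are assumptions on our part; `evaluate_comp1k.py` is the authoritative implementation, and GRAD and CONN (which involve Gaussian gradient filtering and connectivity analysis) are omitted here.

```python
import numpy as np

def sad(pred_alpha, gt_alpha):
    # Sum of Absolute Differences; alphas in [0, 1], reported in thousands.
    return np.abs(pred_alpha - gt_alpha).sum() / 1000.0

def mse(pred_alpha, gt_alpha, trimap):
    # Mean Squared Error averaged over the unknown trimap region
    # (pixels marked 128), as is common for Composition-1k evaluation.
    unknown = trimap == 128
    return ((pred_alpha - gt_alpha) ** 2 * unknown).sum() / max(unknown.sum(), 1)
```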

## Restoration

### Benchmark

#### EDSR

Please refer to EDSR for details.

#### EDVR

Please refer to EDVR for details.

#### ESRGAN

Please refer to ESRGAN for details.

#### SRCNN

Please refer to SRCNN for details.

#### SRResNet and SRGAN

Please refer to SRResNet and SRGAN for details.

#### TOF

Please refer to TOF for details.

## Generation

### Benchmark

#### pix2pix

Please refer to pix2pix for details.

#### CycleGAN

Please refer to CycleGAN for details.