Please refer to GL for details.
Please refer to PConv for details.
Please refer to DeepFillv1 for details.
Please refer to DeepFillv2 for details.
| Method   | SAD   | MSE    | GRAD  | CONN  |
| -------- | ----- | ------ | ----- | ----- |
| DIM      | 50.62 | 0.0151 | 29.01 | 50.69 |
| GCA      | 34.77 | 0.0080 | 16.33 | 32.20 |
| IndexNet | 45.56 | 0.0125 | 25.49 | 44.79 |
The above results follow the original implementations of these methods, which adopt different data augmentation and preprocessing pipelines. We also provide a benchmark of these methods under the same setting, i.e., using the same data augmentation as DIM. The results are shown below.
| Method     | SAD   | MSE    | GRAD  | CONN  |
| ---------- | ----- | ------ | ----- | ----- |
| DIM        | 50.62 | 0.0151 | 29.01 | 50.69 |
| GCA*       | 49.42 | 0.0129 | 28.07 | 49.47 |
| IndexNet*  | 50.11 | 0.0164 | 30.82 | 49.53 |
*: We only ran one experiment under this setting.
Please refer to DIM for details.
Please refer to GCA for details.
Please refer to IndexNet for details.
We provide a python script preprocess_comp1k_dataset.py for compositing the foreground images of the Adobe Composition-1k (comp1k) dataset onto background images from the MS COCO dataset. The resulting merged images are identical to those produced by the official compositing script from Adobe.
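For intuition, the compositing step boils down to the standard alpha blend, `merged = alpha * fg + (1 - alpha) * bg`. The sketch below illustrates only that rule; the function name and file paths are hypothetical and do not reflect the script's actual interface.

```python
# Minimal sketch of alpha compositing (illustrative only; the real script
# handles dataset layout, naming, and bookkeeping on top of this).
import cv2
import numpy as np

def composite(fg_path, alpha_path, bg_path):
    """Blend a foreground onto a background using its alpha matte."""
    fg = cv2.imread(fg_path).astype(np.float32)
    bg = cv2.imread(bg_path).astype(np.float32)
    alpha = cv2.imread(alpha_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # Resize the COCO background to the foreground resolution.
    h, w = fg.shape[:2]
    bg = cv2.resize(bg, (w, h), interpolation=cv2.INTER_CUBIC)

    alpha = alpha[..., None]  # broadcast over the 3 color channels
    merged = alpha * fg + (1.0 - alpha) * bg
    return merged.astype(np.uint8)
```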
We provide a python script evaluate_comp1k.py for evaluating the test results of matting models. The four evaluation metrics (SAD, MSE, GRAD and CONN) are calculated in the same way as the official evaluation script by Adobe. We observe only minor differences between the results of our python script and the official one, which have no effect on the reported performance.
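As a rough reference, a sketch of how SAD and MSE are typically computed on alpha mattes is given below. The conventions used here (alpha scaled to [0, 1], SAD reported in thousands, errors restricted to the unknown region of the trimap, i.e., trimap value 128) are common in the matting literature and are assumptions, not a transcript of the official script; GRAD and CONN involve gradient and connectivity computations and are omitted.

```python
# Illustrative sketch of the SAD and MSE matting metrics under common
# conventions; not a copy of evaluate_comp1k.py.
import numpy as np

def sad(pred_alpha, gt_alpha, trimap):
    """Sum of absolute differences over the unknown region, in thousands."""
    unknown = trimap == 128
    return np.abs(pred_alpha - gt_alpha)[unknown].sum() / 1000.0

def mse(pred_alpha, gt_alpha, trimap):
    """Mean squared error over the unknown region."""
    unknown = trimap == 128
    return ((pred_alpha - gt_alpha) ** 2)[unknown].sum() / unknown.sum()
```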
Please refer to EDSR for details.
Please refer to EDVR for details.
Please refer to ESRGAN for details.
Please refer to SRCNN for details.
Please refer to SRResNet and SRGAN for details.
Please refer to TOF for details.
Please refer to pix2pix for details.
Please refer to CycleGAN for details.