This is the official implementation of the CVPR 2024 paper "Amodal Ground Truth and Completion in the Wild" by Guanqi Zhan, Chuanxia Zheng, Weidi Xie, and Andrew Zisserman.
Occlusion is very common, yet it remains a challenge for computer vision systems. This work introduces an automatic pipeline for obtaining authentic amodal ground truth on real images, together with a new large-scale real-image amodal benchmark that provides this authentic ground truth across a variety of categories. Additionally, two novel architectures, OccAmodal and SDAmodal, are proposed to handle situations where the occluder mask is not annotated and to achieve class-agnostic domain generalization, moving the reconstruction of occluded objects towards an ‘in the wild’ capability.
- pytorch>=0.4.1

Install the remaining dependencies:

```bash
pip install -r requirements.txt
pip install ipdb
```
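
As a quick sanity check after installation (a minimal sketch; nothing repo-specific assumed), the following confirms PyTorch is importable and reports CUDA availability:

```python
# Sanity check: confirm PyTorch imports and report CUDA availability.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
```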
- Download the COCO2014 train and val images from here and unzip them.
- Download the COCOA annotations from here and untar them.
- Ensure the COCOA folder looks like:

```
COCOA/
|-- train2014/
|-- val2014/
|-- test2014/
|-- annotations/
    |-- COCO_amodal_train2014.json
    |-- COCO_amodal_val2014.json
    |-- COCO_amodal_test2014.json
    |-- ...
```
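
To verify the download, a minimal sketch that loads one annotation file and reports its contents; the key names checked here ("images", "annotations") are assumptions based on the standard COCO-style layout:

```python
# Load one COCOA annotation file and report its top-level contents.
# Key names are assumptions based on the COCO-style annotation layout.
import json

ann_path = "COCOA/annotations/COCO_amodal_val2014.json"
with open(ann_path) as f:
    data = json.load(f)

print("Top-level keys:", list(data.keys()))
print("Number of images:", len(data.get("images", [])))
print("Number of annotations:", len(data.get("annotations", [])))
```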
Download the MP3D-Amodal dataset:

- Evaluation dataset: mp3d_eval.zip
- Training dataset: mp3d_train.zip
- Annotations: annotations (same format as COCOA)
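
A small check that the zips unpacked where expected; the directory names below are assumptions derived from the zip file names, so adjust them to the paths you actually configure:

```python
# Confirm the MP3D-Amodal zips unpacked as expected. The directory
# names are assumptions from the zip file names; adjust as needed.
import os

for d in ["mp3d_eval", "mp3d_train", "annotations"]:
    print(f"{d}: {'found' if os.path.isdir(d) else 'MISSING'}")
```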
Clone https://github.com/Tsingularity/dift/tree/main and put its files under dift/ in this repository. Then replace src/models/dift_sd.py with the dift/dift_sd.py provided in this repository. Fill in the paths and run:

```bash
python dift/extract_dift_amodal.py
```
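
For orientation, a minimal sketch of extracting a DIFT feature map with the upstream SDFeaturizer, following the dift repository's demo usage; the prompt, diffusion timestep t, up_ft_index, and image size are illustrative settings, not necessarily those used by dift/extract_dift_amodal.py:

```python
# Illustrative DIFT feature extraction with the upstream SDFeaturizer.
# The prompt, timestep t, up_ft_index, and image size are example
# settings, not the repository's actual configuration.
from PIL import Image
from torchvision.transforms import PILToTensor
from src.models.dift_sd import SDFeaturizer

featurizer = SDFeaturizer()
img = Image.open("example.jpg").convert("RGB").resize((768, 768))
img_tensor = (PILToTensor()(img) / 255.0 - 0.5) * 2  # normalize to [-1, 1]
ft = featurizer.forward(img_tensor,
                        prompt="a photo",
                        t=261,
                        up_ft_index=1,
                        ensemble_size=8)
print(ft.shape)  # [1, C, H', W'] feature map
```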
To evaluate SDAmodal, run:

```bash
sh tools/test_SDAmodal.sh
```
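
Amodal completion is commonly scored with mask IoU against the amodal ground truth; a minimal sketch of the per-mask computation (the script above runs the repository's actual evaluation; this is only illustrative):

```python
# Per-mask IoU between a predicted amodal mask and the ground-truth
# amodal mask, both given as boolean HxW arrays.
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy usage: masks overlapping on 4 of 12 covered pixels.
pred = np.zeros((4, 4), bool); pred[:2] = True
gt = np.zeros((4, 4), bool); gt[1:3] = True
print(mask_iou(pred, gt))  # 0.333... = 4 px intersection / 12 px union
```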
For questions, please contact @Championchess ([email protected]).
```bibtex
@inproceedings{zhan2024amodal,
  title={Amodal Ground Truth and Completion in the Wild},
  author={Zhan, Guanqi and Zheng, Chuanxia and Xie, Weidi and Zisserman, Andrew},
  booktitle={CVPR},
  year={2024}
}
```