This page provides basic usage based on MMDetection. For installation instructions, please see install.md.
We provide testing scripts to evaluate the trained models.
Examples for VOC:
Assume that you have already downloaded the checkpoints to `work_dirs/voc_r50_3x/`.
- Test with a single GPU and get mask AP values.

```shell
CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
    work_dirs/voc_r50_3x/xxx.pth --eval segm
```
- Test with 4 GPUs and get mask AP values.

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_test.sh configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
    work_dirs/voc_r50_3x/xxx.pth 4 --eval segm
```
Examples for COCO2017:
Assume that you have already downloaded the checkpoints to `work_dirs/coco_r50_3x/`.
- Test with 8 GPUs and get mask AP values on the `val` set.

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_test.sh configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
    work_dirs/coco_r50_3x/xxx.pth 8 --eval segm
```
- Test with 8 GPUs and generate results on the `test-dev` set.

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_test.sh configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
    work_dirs/coco_r50_3x/xxx.pth 8 --format-only --eval-options "jsonfile_prefix=work_dirs/r50_coco_dev"
```

This generates the JSON results; submit them to the COCO challenge server for `test-dev` performance evaluation.
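Before uploading, it can be worth sanity-checking the generated result file. The sketch below assumes the standard COCO-style segmentation result layout (a flat JSON list with one record per predicted instance); it builds a tiny in-memory example rather than reading the real file, and the field values are illustrative only.

```python
import json

# Assumed layout of a COCO-style segmentation result file: a flat list of
# per-instance records (values below are made up for illustration).
results = [
    {"image_id": 42, "category_id": 1, "score": 0.91,
     "segmentation": {"size": [480, 640], "counts": "..."}},  # RLE-encoded mask
    {"image_id": 42, "category_id": 3, "score": 0.55,
     "segmentation": {"size": [480, 640], "counts": "..."}},
]
payload = json.dumps(results)

# Checks you might run on the real file before submitting it.
loaded = json.loads(payload)
assert all({"image_id", "category_id", "score", "segmentation"} <= set(r) for r in loaded)
print(f"{len(loaded)} predicted instances")
```

On the real file, replace the in-memory `payload` with `open("work_dirs/r50_coco_dev.segm.json").read()` (the exact output filename depends on your MMDetection version).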
- Test for Pascal VOC

```shell
CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
    work_dirs/voc_r50_3x/xxx.pth --show-dir work_dirs/vis_pascal_voc_r50/
```
- Test for COCO

```shell
CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
    work_dirs/coco_r50_3x/xxx.pth --show-dir work_dirs/vis_coco_r50/
```
Note: The visualization results are saved in the directory given by `--show-dir`. The visualized bounding boxes are generated from the predicted masks.
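Since the drawn boxes come from the predicted masks, one common way to derive a box from a binary mask is to take the min/max of the foreground pixel coordinates. This is a sketch of that idea, not necessarily the repo's exact implementation:

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x1, y1, x2, y2) enclosing the True pixels of a binary mask."""
    ys, xs = np.where(mask)
    if len(xs) == 0:  # empty mask -> no box
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 5x5 mask with a 2x3 foreground patch at rows 1-2, columns 2-4.
m = np.zeros((5, 5), dtype=bool)
m[1:3, 2:5] = True
print(mask_to_bbox(m))  # -> (2, 1, 4, 2)
```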
MMDetection implements distributed and non-distributed training, which use `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory, which is specified by `work_dir` in the config file.
- Train with a single GPU

```shell
CUDA_VISIBLE_DEVICES=0 python tools/train.py ${CONFIG_FILE}
```

Example:

```shell
CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/voc/box_levelset_voc_r50_fpn_3x.py
```
- Train with multiple GPUs

```shell
./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
```

Example for VOC (4 GPUs):

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_train.sh configs/voc/box_levelset_voc_r50_fpn_3x.py 4
```

Example for COCO (8 GPUs):

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_train.sh configs/coco/box_levelset_coco_r50_fpn_3x.py 8
```
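If you train with a GPU count different from the one the config was tuned for, a common convention (the linear scaling rule often used with MMDetection configs) is to scale the learning rate proportionally to the total batch size. The numbers below are placeholders, not values from this repo's configs; check the actual config before relying on them:

```python
def scaled_lr(base_lr: float, base_total_batch: int, gpus: int, samples_per_gpu: int) -> float:
    """Linear scaling rule: lr grows in proportion to the total batch size."""
    return base_lr * (gpus * samples_per_gpu) / base_total_batch

# Placeholder example: a base lr of 0.01 tuned for 8 GPUs x 2 images (batch 16),
# re-scaled when training on 4 GPUs x 2 images (batch 8).
print(scaled_lr(0.01, 16, 4, 2))  # -> 0.005
```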
- Pascal VOC (Augmented) extends the VOC 2012 training set with SBD, following BBTP. The whole dataset in COCO JSON format is available here (GoogleDrive).
- The official website of the remote sensing dataset iSAID provides the full dataset for download. The iSAID toolkit for image splitting and JSON generation is available here. We use only the train set for training and the val set for performance evaluation.
- Here is the medical dataset LiTS. It also needs to be prepared in COCO format. We randomly split the training set (which has GT masks) into train and val at a 4:1 ratio for performance evaluation.
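The 4:1 LiTS split mentioned above can be reproduced with a seeded shuffle over case IDs. This is a generic sketch (the `split_train_val` helper and the case count of 100 are illustrative, not the repo's actual split script):

```python
import random

def split_train_val(ids, val_ratio=0.2, seed=0):
    """Randomly split ids into train/val at (1 - val_ratio) : val_ratio."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)  # seeded, so the split is reproducible
    n_val = int(len(ids) * val_ratio)
    return ids[n_val:], ids[:n_val]

# Illustrative case count; substitute the real list of LiTS case IDs.
train, val = split_train_val(range(100))
print(len(train), len(val))  # -> 80 20
```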