# Getting Started

This page provides basic usage based on MMDetection. For installation instructions, please see install.md.

## Inference with pretrained models

We provide testing scripts to evaluate the trained models.

Examples for VOC: Assume that you have already downloaded the checkpoints to `work_dirs/voc_r50_3x/`.

1. Test with a single GPU and get mask AP values.

   ```shell
   CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
       work_dirs/voc_r50_3x/xxx.pth --eval segm
   ```

2. Test with 4 GPUs and get mask AP values.

   ```shell
   CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_test.sh configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
       work_dirs/voc_r50_3x/xxx.pth 4 --eval segm
   ```

Examples for COCO2017: Assume that you have already downloaded the checkpoints to `work_dirs/coco_r50_3x/`.

1. Test with 8 GPUs and get mask AP values on the val dataset.

   ```shell
   CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_test.sh configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
       work_dirs/coco_r50_3x/xxx.pth 8 --eval segm
   ```

2. Test with 8 GPUs and get mask AP values on the test-dev dataset.

   ```shell
   CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_test.sh configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
       work_dirs/coco_r50_3x/xxx.pth 8 --format-only --eval-options "jsonfile_prefix=work_dirs/r50_coco_dev"
   ```

Generate the JSON results, then submit them to the COCO challenge server for test-dev performance evaluation.
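Before submitting, it can help to sanity-check the generated file. As a minimal sketch (assuming the `jsonfile_prefix` above produces a COCO-style result file, e.g. ending in `.segm.json`), each entry should carry the fields the COCO server expects for instance segmentation results:

```python
# Required fields for a COCO-format instance segmentation result entry;
# "segmentation" is typically RLE-encoded ({"size": [h, w], "counts": ...}).
REQUIRED_KEYS = {"image_id", "category_id", "segmentation", "score"}

def invalid_entries(entries):
    """Return indices of result entries missing any required key."""
    return [i for i, e in enumerate(entries) if not REQUIRED_KEYS <= set(e)]

# Hypothetical data: one well-formed entry, one missing "segmentation".
results = [
    {"image_id": 1, "category_id": 1, "score": 0.9,
     "segmentation": {"size": [480, 640], "counts": "PPYo1"}},
    {"image_id": 2, "category_id": 3, "score": 0.5},
]
print(invalid_entries(results))  # -> [1]
```

In practice you would `json.load` the generated file and run the same check over its entries.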

## Inference for visual results

1. Test for Pascal VOC

   ```shell
   CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/voc/box_levelset_voc_r50_fpn_3x.py \
       work_dirs/voc_r50_3x/xxx.pth --show-dir work_dirs/vis_pascal_voc_r50/
   ```

2. Test for COCO

   ```shell
   CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/boxlevelset/coco/box_levelset_coco_r50_fpn_3x.py \
       work_dirs/coco_r50_3x/xxx.pth --show-dir work_dirs/vis_coco_r50/
   ```

Note: The visual results are saved in the directory specified by `--show-dir`. The visualized bounding boxes are generated from the predicted masks.
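Deriving a box from a mask, as the note describes, amounts to taking the tight bounding box of the mask's foreground pixels. A minimal NumPy sketch (illustrative only, not the repository's code):

```python
import numpy as np

def mask_to_bbox(mask):
    """Tight (x1, y1, x2, y2) bounding box of a binary mask.

    Returns None if the mask is empty.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy mask: foreground in rows 2..4, columns 1..3.
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True
print(mask_to_bbox(mask))  # -> (1, 2, 3, 4)
```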

## Train a model

MMDetection supports both distributed and non-distributed training, using `MMDistributedDataParallel` and `MMDataParallel` respectively.

All outputs (log files and checkpoints) will be saved to the working directory, which is specified by `work_dir` in the config file.

1. Train with a single GPU

   ```shell
   CUDA_VISIBLE_DEVICES=0 python tools/train.py ${CONFIG_FILE}
   ```

   Example:

   ```shell
   CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/voc/box_levelset_voc_r50_fpn_3x.py
   ```

2. Train with multiple GPUs

   ```shell
   ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
   ```

Example for VOC (4 GPUs):

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 ./tools/dist_train.sh configs/voc/box_levelset_voc_r50_fpn_3x.py 4
```

Example for COCO (8 GPUs):

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 ./tools/dist_train.sh configs/coco/box_levelset_coco_r50_fpn_3x.py 8
```
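If you train with a different number of GPUs or images per GPU than the config assumes, MMDetection's convention is the linear scaling rule: the learning rate is proportional to the total batch size (its common default is lr = 0.02 for 8 GPUs × 2 images per GPU). A small helper to compute an adjusted rate under that assumption (check your config's actual `optimizer.lr` and batch settings):

```python
def scaled_lr(base_lr=0.02, base_batch=16, num_gpus=8, imgs_per_gpu=2):
    """Linear-scaling-rule learning rate for a given total batch size.

    base_lr / base_batch follow MMDetection's common default
    (lr = 0.02 for 8 GPUs x 2 images per GPU); adjust both to
    match your config if it differs.
    """
    return base_lr * (num_gpus * imgs_per_gpu) / base_batch

print(scaled_lr(num_gpus=4))  # 4 GPUs x 2 images -> 0.01
```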

## Data preparation

1. Pascal VOC (Augmented) extends the VOC 2012 training set with SBD, following BBTP. The whole dataset in COCO JSON format is available here (GoogleDrive).

2. This is the official website of the remote sensing dataset iSAID for downloading the full dataset. The iSAID toolkit is here for image splitting and JSON generation. We use only the train set for training and the val set for performance evaluation.

3. Here is the medical dataset LiTS. It also needs COCO-format preparation. We randomly split the training set (which has GT masks) into train and val at a 4:1 ratio for performance evaluation.
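The 4:1 train/val split mentioned for LiTS can be sketched as a per-image split of a COCO-format annotation dict (illustrative only; the ratio follows the note above, while the helper name and toy data are assumptions):

```python
import random

def split_coco(ann, ratio=0.8, seed=0):
    """Split a COCO-format dict into train/val subsets by image."""
    images = list(ann["images"])
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * ratio)
    train_imgs, val_imgs = images[:n_train], images[n_train:]

    def subset(imgs):
        ids = {img["id"] for img in imgs}
        return {
            "images": imgs,
            "annotations": [a for a in ann["annotations"] if a["image_id"] in ids],
            "categories": ann["categories"],
        }

    return subset(train_imgs), subset(val_imgs)

# Toy annotation dict with 5 images -> 4 train / 1 val.
ann = {
    "images": [{"id": i} for i in range(5)],
    "annotations": [{"id": i, "image_id": i} for i in range(5)],
    "categories": [{"id": 1, "name": "lesion"}],
}
train, val = split_coco(ann)
print(len(train["images"]), len(val["images"]))  # -> 4 1
```

In practice you would `json.load` the original annotation file and `json.dump` the two subsets to separate train/val files.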