MegEngine/YOLOX


Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on arXiv.

This repo is the MegEngine implementation of YOLOX; a PyTorch implementation is also available.

Updates!!

  • 【2021/08/05】 We release the MegEngine version of YOLOX.

Coming soon

  • Faster YOLOX training speed.
  • More MegEngine models.
  • AMP training with MegEngine.

Benchmark

Light Models.

| Model | size | mAPval 0.5:0.95 | Params (M) | FLOPs (G) | weights |
| ---------- | ---- | --------------- | ---------- | --------- | ------- |
| YOLOX-Tiny | 416  | 32.2            | 5.06       | 6.45      | github  |

Standard Models.

Coming soon!

Quick Start

Installation

Step 1. Install YOLOX.

git clone [email protected]:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or  python3 setup.py develop

Step 2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
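
To sanity-check the installation, you can confirm that both MegEngine and the yolox package import cleanly. A minimal sketch (the exact version strings depend on your environment):

# check_install.py -- minimal sanity check, assuming a standard pip install of both packages
import megengine
import yolox

print("MegEngine version:", megengine.__version__)
print("YOLOX package location:", yolox.__file__)
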
Demo

Step 1. Download a pretrained model from the benchmark table.

Step 2. Use either -n or -f to specify your detector's config. For example:

python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

or

python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

Demo for video:

python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
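
Both -n and -f resolve to the same experiment description. A minimal sketch of what the demo script does under the hood, assuming the MegEngine port keeps the yolox.exp.get_exp helper of the PyTorch repo (the names below are assumptions, not verified against this codebase):

from yolox.exp import get_exp

# by name (-n): look up a built-in experiment such as yolox-tiny
exp = get_exp(exp_file=None, exp_name="yolox-tiny")
# by file (-f): load the same description from its exp file
# exp = get_exp(exp_file="exps/default/yolox_tiny.py", exp_name=None)

model = exp.get_model()
model.eval()
print(exp.test_size)  # e.g. (416, 416), matching --tsize above
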
Reproduce our results on COCO

Step 1. Prepare the COCO dataset:

cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO

Step 2. Reproduce our results on COCO by specifying -n:

python tools/train.py -n yolox-tiny -d 8 -b 128
  • -d: number of GPU devices
  • -b: total batch size; the recommended value is num-gpu * 8

When using -f, the above command is equivalent to:

python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
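
The file passed to -f is an ordinary Python module defining an Exp class, and it is also the place to customize model size or training settings. A minimal sketch of such a file, assuming the MegEngine port follows the exp layout of the PyTorch YOLOX repo (my_exp.py is a hypothetical name; the depth/width values are the ones used for YOLOX-Tiny):

# my_exp.py -- hypothetical custom exp file
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super().__init__()
        # depth/width multipliers that scale the backbone; these are the Tiny settings
        self.depth = 0.33
        self.width = 0.375
        # training and test resolution, matching the 416 input of YOLOX-Tiny
        self.input_size = (416, 416)
        self.test_size = (416, 416)

Such a file can then be trained with python tools/train.py -f my_exp.py -d 8 -b 128.
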
Evaluation

We support batch testing for fast evaluation:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
  • --fuse: fuse conv and bn for inference (a sketch of this folding follows the speed-test command below)
  • -d: number of GPUs used for evaluation; defaults to all available GPUs
  • -b: total batch size across all GPUs

To reproduce the speed test, use the following command:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
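
The --fuse option mentioned above folds each BatchNorm layer into the convolution that precedes it, a standard inference-time optimization. A minimal NumPy sketch of the algebra (illustration only, not the repo's implementation):

import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, mean, var) into conv weight w (out_c, in_c, kh, kw) and bias b."""
    scale = gamma / np.sqrt(var + eps)         # per-output-channel scale
    w_fused = w * scale.reshape(-1, 1, 1, 1)   # rescale each output filter
    b_fused = beta + (b - mean) * scale        # shift the bias accordingly
    return w_fused, b_fused
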
Tutorials

MegEngine Deployment

MegEngine in C++

Dump mge file

NOTE: the resulting model is dumped with optimize_for_inference and enable_fuse_conv_bias_nonlinearity enabled.

python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
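
Under the hood, export_mge.py traces the model and dumps the resulting static graph with the inference optimizations noted above. A rough sketch of the core steps, assuming the standard megengine.jit.trace dump API and the get_exp helper from earlier (not a verbatim copy of the script; the checkpoint layout is an assumption):

import numpy as np
import megengine as mge
from megengine.jit import trace
from yolox.exp import get_exp  # assumed helper, as in the demo sketch above

exp = get_exp(exp_file=None, exp_name="yolox-tiny")
model = exp.get_model()
model.load_state_dict(mge.load("yolox_tiny.pkl"))  # adjust if the checkpoint wraps the state dict
model.eval()

@trace(symbolic=True, capture_as_const=True)
def infer(data):
    return model(data)

# run once with a dummy input so the static graph is captured
infer(mge.Tensor(np.random.rand(1, 3, 416, 416).astype("float32")))

# dump with the optimizations mentioned in the note above
infer.dump(
    "yolox_tiny.mge",
    arg_names=["data"],
    optimize_for_inference=True,
    enable_fuse_conv_bias_nonlinearity=True,
)
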

Benchmark

  • Model Info: yolox-s @ input(1,3,640,640)

  • Testing Devices

    • x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    • AArch64 -- Xiaomi Mi 9 phone
    • CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
[email protected] +fastrun +weight_preprocess (msec) 1 thread 2 thread 4 thread 8 thread
x86_64(fp32) 516.245 318.29 253.273 222.534
x86_64(fp32+chw88) 362.020 NONE NONE NONE
aarch64(fp32+chw44) 555.877 351.371 242.044 NONE
aarch64(fp16+chw) 439.606 327.356 255.531 NONE
CUDA @ CUDA (msec) 1 batch 2 batch 4 batch 8 batch 16 batch 32 batch 64 batch
megengine(fp32+chw) 8.137 13.2893 23.6633 44.470 86.491 168.95 334.248

Third-party resources

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}