We introduce the Deep Motion Modeling Network (DMM-Net), which performs implicit detection and association of objects in an end-to-end manner. DMM-Net models comprehensive object features over multiple frames and simultaneously infers object motion parameters, categories, and visibilities. These outputs are readily used to update the tracklets for efficient MOT. DMM-Net achieves a PR-MOTA score of 12.80 at 120+ fps for jointly performing detection and tracking on the popular UA-DETRAC challenge - orders of magnitude faster than existing methods, with better performance.
DMM-Net on Omni-MOT dataset
DMM-Net on UA-DETRAC dataset
Date | Event
---|---
2019.11 | Finish the papers :-)
2019.10 | Preparing papers
2019.08 | Get results on the Omni-MOT dataset
2019.08 | Can train on the Omni-MOT dataset
2019.07 | Can train on the MOT17 dataset
2019.06 | Can train on the "CVPR 2019 Tracking Challenge"
2019.05 | Can train on the whole UA-DETRAC dataset
2019.05 | Design the tracker
2019.04 | Record the five-cities training dataset
2019.03 | Start a plan to create a new dataset
2019.02 | Optimize this network
2018.12 | Can do the basic detection
2018.11 | Design the loss function
2018.10 | Try the UA-DETRAC dataset
2018.09 | Re-design the input and output
2018.08 | Design the whole network
2018.07 | Start this idea
- $N_f, N_c, N_m, N_a$ respectively denote the number of input frames, object categories (0 for 'background'), time-related motion parameters, and anchor tunnels.
- $W, H$ are the frame width and frame height.
- $\mathcal{I}_t$ denotes the video frame at time $t$. Subsequently, a 4-D tensor $\mathcal{I}_{t_1:t_2}$ denotes the video frames from time $t_1$ to $t_2$. For simplicity, we often omit the subscript $t_1:t_2$.
- $\mathcal{B}_{t_1:t_2}, \mathcal{C}_{t_1:t_2}, \mathcal{V}_{t_1:t_2}$ respectively denote the ground truth boxes, categories, and visibilities in the selected video frames from time $t_1$ to $t_2$. The text also omits the subscript $t_1:t_2$ for these notations.
- $\hat{\mathcal{M}}_{t_1:t_2}, \hat{\mathcal{C}}_{t_1:t_2}, \hat{\mathcal{V}}_{t_1:t_2}$ denote the estimated motion parameters, categories, and visibilities. With time stamps and frames clear from the context, we simplify these notations as $\hat{\mathcal{M}}, \hat{\mathcal{C}}, \hat{\mathcal{V}}$.
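For concreteness, here is a minimal PyTorch sketch of plausible tensor shapes for these quantities; every size below is a hypothetical placeholder, not a value taken from the paper or the released code.

```python
import torch

# Hypothetical sizes, for illustration only.
N_f, N_c, N_m, N_a = 16, 5, 12, 4096   # frames, categories, motion params, anchor tunnels
W, H = 168, 168                        # frame width and height

I = torch.rand(3, N_f, H, W)       # video clip I_{t1:t2}: (channels, frames, height, width)
M_hat = torch.rand(N_a, 4, N_m)    # estimated motion parameters, one set per anchor tunnel
C_hat = torch.rand(N_a, N_c)       # estimated category scores per anchor tunnel
V_hat = torch.rand(N_a, N_f)       # estimated per-frame visibility per anchor tunnel
```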
Schematics of the end-to-end trainable DMM-Net: $N_f$ frames and their time stamps are input to the network. The frame sequence is first processed by a Feature Extractor comprising 3D ResNet-like convolutional groups. Outputs of selected groups are processed by the Motion Subnet, Classifier Subnet, and Visibility Subnet. Each sub-network uses 3D convolutions to learn features that are concatenated and used to predict motion parameters ($\hat{\mathcal{M}}$), object categories ($\hat{\mathcal{C}}$), and visibilities ($\hat{\mathcal{V}}$), where $N_a$, $N_m$, and $N_c$ denote the number of anchor tunnels, motion parameters, and object categories.
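The three-branch design in the figure can be sketched roughly in PyTorch. The module names, channel widths, and the tiny stand-in feature extractor below are all illustrative assumptions; the actual implementation uses a Kinetics-pretrained 3D ResNeXt-101 backbone with multiple feature levels and a different anchor layout.

```python
import torch
import torch.nn as nn

class Subnet3D(nn.Module):
    """One prediction branch (Motion/Classifier/Visibility) built from 3D convolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(256, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.body(x)

class DMMNetSketch(nn.Module):
    """Feature extractor feeding three prediction subnets (illustrative only)."""
    def __init__(self, n_anchors=4, n_motion=12, n_classes=5, n_frames=16):
        super().__init__()
        self.dims = (n_anchors, n_motion, n_classes, n_frames)
        # Stand-in for the 3D ResNeXt-style feature extractor: two strided 3D convs.
        self.features = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.motion = Subnet3D(128, n_anchors * 4 * n_motion)
        self.classifier = Subnet3D(128, n_anchors * n_classes)
        self.visibility = Subnet3D(128, n_anchors * n_frames)

    def forward(self, clip):  # clip: (batch, 3, N_f, H, W)
        n_a, n_m, n_c, n_f = self.dims
        f = self.features(clip)
        b = clip.shape[0]
        # Each spatio-temporal grid cell contributes n_anchors anchor tunnels.
        m_hat = self.motion(f).reshape(b, -1, 4, n_m)   # motion parameters
        c_hat = self.classifier(f).reshape(b, -1, n_c)  # category scores
        v_hat = self.visibility(f).reshape(b, -1, n_f)  # per-frame visibilities
        return m_hat, c_hat, v_hat

m_hat, c_hat, v_hat = DMMNetSketch()(torch.rand(1, 3, 16, 64, 64))
```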
We directly deploy the trained network in the DMM Tracker (DMMT), as shown in the following figure. $N$ frames are processed by the tracker, where the trained DMM-Net takes $N_f$ frames as its input and outputs predicted tunnels containing every potential object's motion parameter matrix $\hat{\mathcal{M}}$, category matrix $\hat{\mathcal{C}}$, and visibility matrix $\hat{\mathcal{V}}$, which are then filtered by the Tunnel Filter. After that, the track set is updated by associating the filtered tunnels with the previous track set via their IoU.
This tracker achieves 120+ fps for jointly performing detection and tracking.
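To picture the association step, here is a minimal sketch of IoU-based matching between existing tracks and filtered tunnel boxes. The greedy strategy and the 0.5 threshold are assumptions for illustration; the released DMMT may associate tunnels differently.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def associate(tracks, tunnel_boxes, iou_thresh=0.5):
    """Greedily extend each track with its best-overlapping filtered tunnel box."""
    unmatched = list(range(len(tunnel_boxes)))
    for track in tracks:
        if not unmatched:
            break
        best = max(unmatched, key=lambda j: iou(track[-1], tunnel_boxes[j]))
        if iou(track[-1], tunnel_boxes[best]) >= iou_thresh:
            track.append(tunnel_boxes[best])
            unmatched.remove(best)
    # Tunnels that match no existing track start new tracks.
    tracks += [[tunnel_boxes[j]] for j in unmatched]
    return tracks

# Example: one existing track, two new tunnel boxes.
tracks = associate([[(0, 0, 10, 10)]], [(1, 1, 11, 11), (50, 50, 60, 60)])
```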
Name | Version
---|---
Python | 3.6
CUDA | >=8.0
Additionally, install all the required Python packages with the following commands:

```
cd <project path>
pip install -r requirement.txt
```
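Optionally, you can verify the environment afterwards; this one-liner only assumes that PyTorch was installed by the previous step:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```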
- Clone this repository:
```
git clone <repository url>
```
- Download the pre-trained base net model, and save it to:
```
<project path>/weights/resnext-101-64f-kinetics.pth
```
- Download the training dataset and testing dataset from [baidu] or [dropbox no space available :-(]
- Modify `<project path>/config/__init__.py` to:
```python
# configure_name = 'config_gpu4_ua.json'
# configure_name = 'config_gpu4_ua_test.json'
# configure_name = 'config_gpu4_ua_with_amot.json'
configure_name = 'config_gpu4_amot.json'
```
- Modify `<project path>/config/config_gpu4_amot.json`:
```json
{
    "dataset_name": "AMOTD",
    "dataset_path": <your downloaded omotd folder>,
    "phase": "test",
    ...
    "test": {
        "resume": <your downloaded weights>,
        "dataset_type": <"train" or "test">,
        ...
        "base_net_weights": null,
        "log_save_folder": <your log save folder>,
        "image_save_folder": <your image save folder>,
        "weights_save_folder": <your weights save folder>,
        ...
```
- Activate your Python environment, and run:
```
cd <project folder>
python test_tracker_amotd.py
```
- Modify `<project path>/config/__init__.py` to:
```python
# configure_name = 'config_gpu4_ua.json'
# configure_name = 'config_gpu4_ua_test.json'
# configure_name = 'config_gpu4_ua_with_amot.json'
configure_name = 'config_gpu4_amot.json'
```
- Modify `<project path>/config/config_gpu4_amot.json`:
```json
{
    "dataset_name": "AMOTD",
    "dataset_path": <your downloaded omotd folder>,
    ...
    "phase": "train",
    ...
    "train": {
        "resume": null,
        "batch_size": 8, <change according to your GPU capability>
        ...
        "log_save_folder": <your log save folder>,
        "image_save_folder": <your image save folder>,
        "weights_save_folder": <your weights save folder>,
        ...
```
- Activate your Python environment, and run:
```
cd <project folder>
python train_amot.py
```
- Download the training and testing datasets from the [UA-DETRAC Official Site] or [baidu]
- Modify `<project path>/config/__init__.py` to:
```python
configure_name = 'config_gpu4_ua.json'
# configure_name = 'config_gpu4_ua_test.json'
# configure_name = 'config_gpu4_ua_with_amot.json'
# configure_name = 'config_gpu4_amot.json'
```
- Modify `<project path>/config/config_gpu4_ua.json`:
```json
{
    "dataset_name": "UA-DETRAC",
    "dataset_path": <the UA-DETRAC dataset folder>,
    "phase": "test",
    ...
    "test": {
        "resume": <network weights file>,
        "dataset_type": "test",
        ...
        "base_net_weights": null,
        "log_save_folder": <your log save folder>,
        "image_save_folder": <your image save folder>,
        "weights_save_folder": <your network weights save folder>,
        ...
```
- Activate your Python environment, and run:
```
cd <project folder>
python test_tracker_ua.py
```
- Modify `<project path>/config/__init__.py` to:
```python
configure_name = 'config_gpu4_ua.json'
# configure_name = 'config_gpu4_ua_test.json'
# configure_name = 'config_gpu4_ua_with_amot.json'
# configure_name = 'config_gpu4_amot.json'
```
- Modify `<project path>/config/config_gpu4_ua.json`:
```json
{
    "dataset_name": "UA-DETRAC",
    "dataset_path": <the UA-DETRAC dataset folder>,
    "phase": "train",
    ...
    "log_save_folder": <your log save folder>,
    "image_save_folder": <your image save folder>,
    "weights_save_folder": <your network weights save folder>,
    ...
```
- Activate your Python environment, and run:
```
cd <project folder>
python train_ua.py
```
```
@inproceedings{ShiJie20,
  author    = {Shijie Sun and Naveed Akhtar and XiangYu Song and Huansheng Song and Ajmal Mian and Mubarak Shah},
  title     = {Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020}
}
```
This work is based on PyTorch and 3D ResNet. It is also inspired by SSD and DAN.
The methods provided on this page are published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors; you may not use this work for commercial purposes; and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. If you are interested in commercial usage, you can contact us for further options.