Heon Song, Daiki Suehiro, Seiichi Uchida
For visual object tracking, it is difficult to realize an almighty online tracker due to the huge variation of target appearance across image sequences. This paper proposes an online tracking method that adaptively aggregates arbitrary multiple online trackers. The performance of the proposed method is theoretically guaranteed to be comparable to that of the best tracker for any image sequence, even though the best expert is unknown during tracking. Experiments with a wide range of benchmark datasets and aggregated trackers demonstrate that the proposed method can achieve state-of-the-art performance.
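The aggregation builds on the classic prediction-with-expert-advice framework: experts that have accrued low loss so far dominate the combined prediction. Purely as an illustration of that idea (this is not the repository's actual implementation, and `aggregate_boxes` is a hypothetical helper), an exponentially weighted combination of expert bounding boxes can be sketched as:

```python
import numpy as np

def aggregate_boxes(expert_boxes, losses, eta=1.0):
    """Exponentially weighted average of expert bounding boxes.

    expert_boxes: (n_experts, 4) array of [x, y, w, h] predictions.
    losses: (n_experts,) cumulative loss of each expert so far.
    eta: learning rate controlling how sharply weights concentrate.
    """
    weights = np.exp(-eta * losses)
    weights /= weights.sum()          # normalize to a distribution
    return weights @ expert_boxes, weights

boxes = np.array([[10.0, 10, 50, 50],
                  [12.0, 11, 48, 52],
                  [100.0, 90, 40, 40]])  # the third expert is an outlier
losses = np.array([0.1, 0.2, 5.0])       # the outlier has accrued high loss
state, w = aggregate_boxes(boxes, losses)
# the outlier receives negligible weight, so state stays near the first two boxes
```

The theoretical guarantee in the paper is a regret bound of this flavor: the aggregated tracker's cumulative loss stays close to that of the best single expert in hindsight.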
In this repository, we implemented or edited the following trackers to use as experts.
You can use the trackers with just a few lines of code.
Tracker | Link |
---|---|
ATOM (CVPR 2019) | Paper / Original Repo |
DaSiamRPN (ECCV 2018) | Paper / Original Repo |
DiMP (ICCV 2019) | Paper / Original Repo |
DROL (AAAI 2020) | Paper / Original Repo |
GradNet (ICCV 2019) | Paper / Original Repo |
KYS (ECCV 2020) | Paper / Original Repo |
MemDTC (TPAMI 2019) | Paper / Original Repo |
MemTrack (ECCV 2018) | Paper / Original Repo |
Ocean (ECCV 2020) | Paper / Original Repo |
PrDiMP (CVPR 2020) | Paper / Original Repo |
RLS-RTMDNet (CVPR 2020) | Paper / Original Repo |
ROAM (CVPR 2020) | Paper / Original Repo |
RPT (CVPR 2020) | Paper / Original Repo |
SiamBAN (CVPR 2020) | Paper / Original Repo |
SiamCAR (CVPR 2020) | Paper / Original Repo |
SiamDW (CVPR 2019) | Paper / Original Repo |
SiamFC (ECCVW 2016) | Paper / Original Repo |
SiamFC++ (AAAI 2020) | Paper / Original Repo |
SiamMCF (ECCVW 2018) | Paper / Original Repo |
SiamR-CNN (CVPR 2020) | Paper / Original Repo |
SiamRPN (CVPR 2018) | Paper / Original Repo |
SiamRPN++ (CVPR 2019) | Paper / Original Repo |
SPM (CVPR 2019) | Paper / Original Repo |
Staple (CVPR 2016) | Paper / Original Repo |
THOR (BMVC 2019) | Paper / Original Repo |
For DaSiamRPN, RLS-RTMDNet, RPT and SPM, we've slightly modified the code to be compatible with Python3 and Pytorch >= 1.3.
We evaluated the performance of the experts and AAA on the following datasets.
- [OTB2015](http://cvlab.hanyang.ac.kr/tracker_benchmark/index.html)
- [NFS](http://ci2cv.net/nfs/index.html)
- [UAV123](https://uav123.org/)
- [TColor128](http://www.dabi.temple.edu/~hbling/data/TColor-128/TColor-128.html)
- [TrackingNet](https://tracking-net.org/)
- [VOT2018](http://www.votchallenge.net/)
- [LaSOT](https://cis.temple.edu/lasot/download.html)
- [Got10K](http://got-10k.aitestunion.com/)
VOT2018 is evaluated in the unsupervised experiment, in the same way as the other datasets.
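Success on these benchmarks is computed from the IoU (intersection over union) between predicted and ground-truth boxes. A minimal sketch, assuming `[x, y, w, h]` boxes (illustrative only, not the evaluation toolkit's exact code):

```python
def iou(a, b):
    """IoU of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

# success rate at IoU threshold 0.5 over a toy two-frame sequence
preds = [[0, 0, 10, 10], [5, 5, 10, 10]]
gts = [[0, 0, 10, 10], [100, 100, 10, 10]]
success = sum(iou(p, g) > 0.5 for p, g in zip(preds, gts)) / len(preds)
```

Benchmarks typically sweep this threshold from 0 to 1 and report the area under the resulting success curve (AUC).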
The following frameworks were used to conveniently track videos and evaluate trackers.
- [pytracking](https://github.com/visionml/pytracking) for tracking datasets.
- [pysot-toolkit](https://github.com/StrangerZhang/pysot-toolkit) for evaluating trackers.
We strongly recommend using a virtual environment like Anaconda or Docker.
The following shows how to build the virtual environment for AAA with Anaconda.
# clone this repository
git clone https://github.com/songheony/AAA-journal
cd AAA-journal
# create and activate anaconda environment
conda create -y -n [ENV_NAME] python=[PYTHON_VERSION>=3]
conda activate [ENV_NAME]
# install requirements
bash install_for_aaa.sh
If you want to apply AAA to your own project,
simply write a Python script like the following:
import numpy as np
from algorithms.aaa import AAA

img_paths = []  # list of image file paths
initial_bbox = [x, y, w, h]  # left x, top y, width, height of the initial target bbox
n_experts = 6  # the number of experts you are using

# define AAA
theta, gamma = 0.92, 11  # you can tune hyperparameters by running run_tuning.sh
algorithm = AAA(n_experts, mode="LOG_DIR", threshold=theta, feature_factor=gamma)

# initialize AAA
algorithm.initialize(img_paths[0], initial_bbox)

# track the target
for img_path in img_paths[1:]:
    experts_result = np.zeros((n_experts, 4))  # the matrix of the experts' estimations for this frame
    # state is the prediction of the target bbox.
    # offline is None unless the frame is an anchor frame; on anchor frames it holds the offline tracking results.
    # weight is the weight of each expert.
    state, offline, weight = algorithm.track(img_path, experts_result)
In addition, the trackers we have implemented can be easily executed with the following Python script.
from select_options import select_expert

img_paths = []  # list of image file paths
initial_bbox = [x, y, w, h]  # left x, top y, width, height of the initial target bbox

# define the expert
tracker_name = "DiMP"
tracker = select_expert(tracker_name)

# initialize the expert
tracker.initialize(img_paths[0], initial_bbox)

# track the target
for img_path in img_paths[1:]:
    # state is the prediction of the target bbox.
    state = tracker.track(img_path)
- PyTorch 1.6.0
- Tensorflow 1.14.0
- CUDA 10.1
- GCC 8
First, metafiles, including pretrained weights, need to be downloaded.
Then, the paths to the metafiles must be edited in path_config.py
and local.py.
In order to run the experts, you need to install additional libraries.
We offer an install script to make this easy:
# Only for Ubuntu
sudo apt install -y libopenmpi-dev libgl1-mesa-glx ninja-build
# activate anaconda environment
conda activate [ENV_NAME]
# install requirements
bash install_for_experts.sh
We provide scripts to reproduce all results, figures, and tables in our paper.
In addition, we provide the following files in case you don't have time to run all the scripts yourself.
Experts tracking results
AAA tuning results
AAA tracking results
HDT tracking results
MCCT tracking results
Baselines tracking results
# Run experts
# If you've downloaded Experts tracking results, you can skip this command
bash run_experts.sh
# Tune the hyperparameter
# If you've downloaded AAA tuning results, you can skip this command
bash run_tuning.sh
# Run AAA
# If you've downloaded AAA tracking results, you can skip this command
bash run_algorithm.sh
# Run HDT
# If you've downloaded HDT tracking results, you can skip this command
bash run_hdt.sh
# Run MCCT
# If you've downloaded MCCT tracking results, you can skip this command
bash run_mcct.sh
# Run simple baselines
# If you've downloaded Baselines tracking results, you can skip this command
bash run_baselines.sh
# Visualize figures and tables in our paper
python visualize_figure.py
The code is designed to run the aggregation algorithms after the experts have been run on the test datasets.
However, it is easy to modify the code to do both simultaneously.
If you find AAA useful in your work, please cite our paper:
@article{song2020aaa,
title={AAA: Adaptive Aggregation of Arbitrary Online Trackers with Theoretical Performance Guarantee},
author={Song, Heon and Suehiro, Daiki and Uchida, Seiichi},
journal={arXiv preprint arXiv:2009.09237},
year={2020}
}
👤 Heon Song
- Github: @songheony
- Contact: [email protected]