Rotation equivariance meets local feature matching.
Figure inspired by R2D2.
First, clone (or unzip) the repo:
```bash
git clone xxxx
```
Next, follow the steps to create a conda environment:
```bash
sbatch jobscripts/create_gpu_env.sh
```
⌛ This step takes about 10 minutes.
Note: This has only been tested on the Lisa cluster. If you want to run the code on a CPU, please follow the instructions here. However, we recommend using the GPU version.
Tip: To use the conda environment on the login node, you will need to run the following commands before activating the environment:
```bash
# NOTE: The following three commands are specific to our cluster and may not be needed on your system.
module purge
module load 2021
module load Anaconda3/2021.05
conda activate relfm-v1.0
```
You can check if the packages are installed correctly by running (on the login node):
```bash
python setup/check_packages.py
```
To check it on a cluster GPU, run:
```bash
sbatch jobscripts/check_packages.sh
```
⌛ This step takes < 1 min.
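If you prefer a quick manual check, the sketch below (not the repo's actual `setup/check_packages.py`) simply tries importing a few likely dependencies; the package list is an assumption.

```python
# a minimal sketch, NOT the repo's setup/check_packages.py: try importing
# a few likely dependencies (the exact list is an assumption)
import importlib

for pkg in ["torch", "torchvision", "numpy", "matplotlib", "PIL"]:
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg:<12} OK ({getattr(mod, '__version__', 'unknown')})")
    except ImportError as err:
        print(f"{pkg:<12} MISSING: {err}")
```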
Further, before running any code, please set the `PYTHONPATH` as follows:
```bash
# navigate to the directory where the code is located
cd /path/to/repo/
export PYTHONPATH=$PWD:$PWD/lib/r2d2/
```
We provide checkpoints for models trained on the Aachen dataset, following R2D2. The key models are tabulated below:
| Model | Description | Checkpoint |
|---|---|---|
| R2D2 | R2D2 baseline | `r2d2_WASF_N16.pt` |
| C3 | Discrete-group C3 model from the C-3PO family | `finalmodelC3_epoch_2_4x16_1x32_1x64_2x128.pt` |
| C4 | Discrete-group C4 model from the C-3PO family | `finalmodelC4_epoch_5_4x16_1x32_1x64_2x128.pt` |
| C8 | Discrete-group C8 model from the C-3PO family | `finalmodelC8_epoch_1_4x16_1x32_1x64_2x128.pt` |
| SO2 | Continuous-group SO(2) model from the C-3PO family | `finalmodelSO2_epoch_17_4x16_1x32_1x64_2x128.pt` |
Note that the equivariant models are selected via early stopping; in general, they converge faster than the non-equivariant models.
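If you want to peek inside a checkpoint before running anything, a minimal sketch is below. It assumes the checkpoints follow R2D2's convention of storing a dict with a `net` constructor string and a `state_dict`, and that they live under `trained_models/`; verify both against the repo.

```python
# a minimal sketch, assuming R2D2-style checkpoints: a dict with 'net'
# (a constructor string) and 'state_dict' (the weights); path is assumed
import torch

ckpt = torch.load("trained_models/r2d2_WASF_N16.pt", map_location="cpu")
print(ckpt.keys())  # e.g., dict_keys(['net', 'state_dict', ...])

n_params = sum(p.numel() for p in ckpt["state_dict"].values())
print(f"~{n_params / 1e6:.2f}M parameters")
```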
The performance across varying rotations is shown in the Figure below.
To evaluate our pre-trained models on the HPatches dataset, please follow these steps. For compactness, we provide the steps for R2D2 and our SO(2) model, but they apply to the other models as well. Note that since the checkpoints are small, we provide them within the repo.
Download the dataset by running:
```bash
mkdir -p $HOME/datasets/
cd $HOME/datasets/
wget http://icvl.ee.ic.ac.uk/vbalnt/hpatches/hpatches-sequences-release.tar.gz
tar -xvf hpatches-sequences-release.tar.gz
rm hpatches-sequences-release.tar.gz
```
This shall create a folder `hpatches-sequences-release` in `$HOME/datasets/`.
Then, symlink it as a folder within the repo (don't forget to set the path to the repo correctly):
```bash
cd /path/to/repo/
ln -s $HOME/datasets data
# check out the dataset
ls data/hpatches-sequences-release/
```
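As a quick sanity check (run from the repo root, so the `data` symlink resolves): the full HPatches release has 116 sequences, 57 illumination (`i_*`) and 59 viewpoint (`v_*`).

```python
# quick sanity check on the downloaded dataset (run from the repo root)
from pathlib import Path

root = Path("data/hpatches-sequences-release")
seqs = sorted(p.name for p in root.iterdir() if p.is_dir())
print(len(seqs), "sequences")  # expected: 116
print(sum(s.startswith("i_") for s in seqs), "illumination,",
      sum(s.startswith("v_") for s in seqs), "viewpoint")  # expected: 57, 59
```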
You can check out sample images from the dataset.
Activate the environment:
```bash
# NOTE: The following three commands are specific to our cluster and may not be needed on your system.
module purge
module load 2021
module load Anaconda3/2021.05
conda activate relfm-v1.0
# in case it is not already installed
pip install ipython
```
Look at a sample:
```python
(relfm-v1.0) $ ipython
In [1]: %matplotlib inline
In [2]: from PIL import Image
In [3]: from os.path import exists, expanduser, join
In [4]: path = join(expanduser("~"), "datasets/hpatches-sequences-release/v_yard/1.ppm")
In [5]: assert exists(path), f"File does not exist at {path}. Are you sure the dataset is downloaded correctly?"
In [6]: img = Image.open(path)
In [7]: img.show()  # this may not open on Lisa since it is not a GUI terminal
In [8]: img.save("hpatches_sample.png")  # you can save and then visualize it in VS Code
In [9]: quit()
```
In this step, the model is run on each sample in the dataset and the predictions are stored. We recommend running this step on Lisa since it takes a while.
Inference for R2D2 baseline
First, run inference for the R2D2 model:
```bash
export PYTHONPATH=$PWD:$PWD/lib/r2d2/
sbatch jobscripts/inference_r2d2_on_hpatches.job
```
This will run inference, generate outputs, and save them to the folder `$HOME/outputs/rotation-equivariant-lfm/hpatches/r2d2_WASF_N16`. A different checkpoint will get its own folder, named after the checkpoint. The output shall have one folder per image sequence in HPatches, e.g., `v_home`. Each folder shall contain the following files (a quick loading sketch follows below):
- `1_rotation_R.npy`: the predicted keypoint locations and descriptor vectors for the source image.
- `t_rotation_R.npy`: the predicted keypoint locations and descriptor vectors for the target image with index $t$ and rotation $R$, $\forall t \in \{2, 3, 4, 5, 6\}, R \in \{0, 15, 30, \ldots, 345, 360\}$.
⌛ This step takes about 5 minutes.
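To sanity-check a single prediction file, something like the sketch below should work. The exact keys stored in each `.npy` file are an assumption here; check the inference script for the definitive format.

```python
# a minimal sketch for inspecting one prediction file; the stored keys are
# an assumption -- check the inference script for the exact format
from os.path import expanduser, join

import numpy as np

path = join(expanduser("~"), "outputs/rotation-equivariant-lfm/hpatches",
            "r2d2_WASF_N16", "v_home", "2_rotation_90.npy")
pred = np.load(path, allow_pickle=True).item()  # dict-like payload
print(pred.keys())  # expect keypoint locations and descriptor vectors
```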
Inference for our C-3PO (SO(2)) model
Note that in the jobscript, we have set the default checkpoint path to the R2D2 checkpoint. To run inference for the SO(2) model, change the `ckpt` variable in `jobscripts/inference_r2d2_on_hpatches.job` to `ckpt=trained_models/finalmodelSO2_epoch_17_4x16_1x32_1x64_2x128.pt`. Then, run:
```bash
sbatch jobscripts/inference_r2d2_on_hpatches.job
```
This should generate outputs in `$HOME/outputs/rotation-equivariant-lfm/hpatches/finalmodelSO2_epoch_17_4x16_1x32_1x64_2x128`.
⌛ This step takes about 2 hours.
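Before moving on to evaluation, it is worth confirming that both inference runs produced their output folders (paths taken from the steps above):

```python
# confirm that both inference runs produced output folders
from os.path import expanduser, isdir, join

base = join(expanduser("~"), "outputs/rotation-equivariant-lfm/hpatches")
for name in ["r2d2_WASF_N16", "finalmodelSO2_epoch_17_4x16_1x32_1x64_2x128"]:
    print(name, "->", "found" if isdir(join(base, name)) else "missing")
```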
In this step, we use the predictions from the previous step to generate the results.
```bash
module purge
module load 2021
module load Anaconda3/2021.05
conda activate relfm-v1.0
export PYTHONPATH=$PWD:$PWD/lib/r2d2/
python relfm/eval/r2d2_on_hpatches.py --quantitative --models R2D2 "SO(2)"
```
⌛ This step takes about 1 hour. To speed up the runtime, you can run it as a Slurm job instead:
```bash
sbatch jobscripts/evaluation_r2d2_on_hpatches.job
```
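For intuition, a D2-Net-style evaluation boils down to counting mutual nearest-neighbour descriptor matches whose homography-warped source keypoints land close to their matched target keypoints. The sketch below illustrates the idea; the function and its signature are illustrative, not the repo's API.

```python
# illustrative sketch of a D2-Net-style matching metric; NOT the repo's API.
# assumes L2-normalised descriptors and keypoints given as (x, y) rows
import numpy as np

def match_accuracy(kpts1, desc1, kpts2, desc2, H, thresh=3.0):
    # mutual nearest neighbours in descriptor space
    sim = desc1 @ desc2.T
    nn12, nn21 = sim.argmax(axis=1), sim.argmax(axis=0)
    mutual = np.where(nn21[nn12] == np.arange(len(kpts1)))[0]
    if len(mutual) == 0:
        return 0.0

    # warp source keypoints with the ground-truth 3x3 homography H
    src = np.concatenate([kpts1[mutual], np.ones((len(mutual), 1))], axis=1)
    proj = (H @ src.T).T
    proj = proj[:, :2] / proj[:, 2:]

    # a match counts as correct if the reprojection error is below the threshold
    err = np.linalg.norm(proj - kpts2[nn12[mutual]], axis=1)
    return float((err < thresh).mean())
```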
In case you want to include the C4 variant, you can run the following command:
```bash
python relfm/eval/r2d2_on_hpatches.py --quantitative --models R2D2 "SO(2)" "C_{4}"
```
To generate qualitative results, you can run the following command:
```bash
python relfm/eval/r2d2_on_hpatches.py --qualitative --models R2D2 "SO(2)" --sequence_to_visualize i_castle --rotation_to_visualize 90
```
This will generate a figure at `./Figures/qual_results_rotation_i_castle_90.pdf`.
A sample qualitative result is shown in the Figure below.
The first two columns show the detected keypoints and the last two columns show (correct) matches in green.
Note: Our evaluation is partly based on the notebook provided by D2-Net.
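If you want a quick, standalone way to eyeball matches outside the evaluation script, a rough matplotlib sketch is below; the function and argument names are illustrative, not the repo's plotting code.

```python
# rough, standalone visualization sketch (not the repo's plotting code);
# assumes both images have the same height and keypoints are (x, y) pairs
import matplotlib.pyplot as plt
import numpy as np

def plot_matches(img1, img2, kpts1, kpts2, color="lime"):
    img1, img2 = np.asarray(img1), np.asarray(img2)
    w1 = img1.shape[1]
    canvas = np.concatenate([img1, img2], axis=1)  # side-by-side panel
    plt.imshow(canvas)
    for (x1, y1), (x2, y2) in zip(kpts1, kpts2):
        plt.plot([x1, x2 + w1], [y1, y2], color=color, linewidth=0.5)
    plt.axis("off")
    plt.savefig("matches.png", bbox_inches="tight")  # view it in VS Code, as above
```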
The training steps are similar to those for evaluation. However, you need to download additional datasets such as Aachen.
- Activate the environment:
  ```bash
  # activate the environment
  conda activate relfm-v1.0
  # set the python path
  export PYTHONPATH=$PWD/lib/r2d2/:$PWD
  ```
- Download the Aachen dataset, passing the path to the root data folder, e.g., `$HOME/datasets/`:
  ```bash
  bash download_aachen.sh -d /path/to/root/data/folder/
  ```
  This will download all required datasets to `/path/to/root/data/folder/`. Note that the script symlinks this root data folder to `$PWD/data/`, i.e., you can check out the data directly from `$PWD/data/`.
- Note that the repo comes with the R2D2 code in `lib/r2d2`, so there is no need to download it separately.
- Training R2D2: run the following command:
  ```bash
  sbatch jobscripts/r2d2_training.job
  ```
  You can check the progress of your job via the Slurm output file, and the job status via `squeue | grep $USER`. Note that this is only a sample run and will save a model at `/home/$USER/models/r2d2-sample/model.pt` (see the check below).
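Once the job finishes, you can verify that the sample run actually saved the model (path from the note above):

```python
# check that the sample training run saved its model
from os.path import exists, expanduser, join

path = join(expanduser("~"), "models/r2d2-sample/model.pt")
print("model saved:" if exists(path) else "model missing:", path)
```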