PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval (CVPR 2019)
Wenxiao Zhang and Chunxia Xiao
Wuhan University
PCAN is an attention module for point cloud based retrieval that predicts the significance of each local point feature based on point context. This work builds on PointNetVLAD and PointNet++.
We also implement a PyTorch version in another project, in `models/PCAN.py`; you can check it if needed.
- Python3
- CUDA
- Tensorflow
- Scipy
- Pandas
- Sklearn
For attention map visualization, MATLAB is also needed.
The TF operators are included under `tf_ops`; you need to compile them first (check `tf_xxx_compile.sh` under each op's subfolder). Refer to PointNet++ for more details.
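After compiling, you can quickly check that the compiled libraries load. The snippet below is only a minimal sketch: it assumes the standard PointNet++ layout of the `tf_ops` folder (a `tf_sampling`, `tf_grouping`, and `tf_interpolate` wrapper module in the corresponding subfolders); adjust the paths and module names if this repository differs.

```python
# Sanity check that the compiled TF ops can be imported.
# Assumes a PointNet++-style layout: tf_ops/sampling, tf_ops/grouping,
# tf_ops/3d_interpolation, each containing the Python wrapper that calls
# tf.load_op_library on the compiled .so file.
import os
import sys

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(BASE_DIR, 'tf_ops/sampling'))
sys.path.append(os.path.join(BASE_DIR, 'tf_ops/grouping'))
sys.path.append(os.path.join(BASE_DIR, 'tf_ops/3d_interpolation'))

import tf_sampling     # farthest point sampling / gather ops
import tf_grouping     # ball query / grouping ops
import tf_interpolate  # three_nn / three_interpolate ops

print('All TF ops loaded successfully.')
```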
For the benchmark datasets and their preparation, please refer to PointNetVLAD.
To train our network, run the following command:
`python train.py`
To evaluate the model, run the following command:
`python evaluate.py`
The pre-trained models for both the baseline and refined networks can be downloaded here.
For visualization, you can run `visualization/show_attention_map.m` in MATLAB to visualize the attention map. We provide the weight file of an example point cloud in the `oxford_weights` folder.
To produce the weights of all the point clouds, you can run the following command:
`python evaluate_save_weights.py`
The weights will be saved as .bin files in the `datasetname_weights` folder.
You can also use the Python library `mpl_toolkits.mplot3d` for visualization; a sketch is given below.
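For example, the following sketch colors a point cloud by its saved per-point weights. The file formats here are assumptions, not part of this repository's documented interface: the point cloud is read as N x 3 float64 coordinates (PointNetVLAD benchmark style) and the weight file as one float32 per point, and both paths are hypothetical. Adjust the dtypes and paths to match your saved .bin files.

```python
# Minimal sketch: color a point cloud by its attention weights using
# mpl_toolkits.mplot3d. File formats and paths are assumptions; adapt
# them to the .bin files produced by evaluate_save_weights.py.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

pc_file = 'path/to/pointcloud.bin'             # hypothetical point cloud file
weight_file = 'oxford_weights/pointcloud.bin'  # hypothetical weight file

points = np.fromfile(pc_file, dtype=np.float64).reshape(-1, 3)
weights = np.fromfile(weight_file, dtype=np.float32)
assert len(weights) == len(points), 'expected one weight per point'

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(points[:, 0], points[:, 1], points[:, 2],
                c=weights, cmap='jet', s=2)
fig.colorbar(sc, ax=ax, label='attention weight')
plt.show()
```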
If you want to reproduce the visualization results shown in the paper, please use this model, which is an earlier refined model trained at the time we submitted the paper.
If you find our code useful, please cite our paper:
@inproceedings{zhang2019pcan,
title={PCAN: 3D attention map learning using contextual information for point cloud based retrieval},
author={Zhang, Wenxiao and Xiao, Chunxia},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={12436--12445},
year={2019}
}
Feel free to contact me if you have any questions. [email protected]