This repo contains the code and results of the AAAI 2021 paper:
Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal
Xiaodong Cun, Chi-Man Pun*
University of Macau
Datasets | Models | Paper | 🔥 Online Demo! (Google Colab)
Overview of the proposed two-stage framework. First, we propose a multi-task network, SplitNet, for watermark detection, removal, and recovery. Then, we propose RefineNet to smooth the learned region using the predicted mask and the recovered background from the previous stage. As a result, our network can be trained end-to-end without any manual intervention. Note that, for clarity, the skip connections between the encoders and decoders are not shown.
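For readers who prefer code, the following is a minimal PyTorch sketch of the two-stage data flow described above. It is not the authors' architecture: the `SplitNet` and `RefineNet` names come from the paper, but the plain convolutional blocks below are simplified stand-ins for the attention-guided ResUNets, used only to illustrate how the predicted mask and coarse background from stage one feed the refinement stage.

```python
# Minimal sketch of the two-stage pipeline (simplified stand-in, not the paper's exact model).
import torch
import torch.nn as nn


class SplitNet(nn.Module):
    """Stage 1: multi-task prediction of the watermark mask, a coarse background, and the watermark."""

    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.mask_head = nn.Sequential(nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())  # watermark detection
        self.bg_head = nn.Conv2d(ch, 3, 3, padding=1)                                 # watermark removal (coarse background)
        self.wm_head = nn.Conv2d(ch, 3, 3, padding=1)                                 # watermark recovery

    def forward(self, x):
        feat = self.encoder(x)
        return self.mask_head(feat), self.bg_head(feat), self.wm_head(feat)


class RefineNet(nn.Module):
    """Stage 2: smooth the detected region given the predicted mask and the coarse background."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + 1 + 3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, watermarked, mask, coarse_bg):
        refined = self.body(torch.cat([watermarked, mask, coarse_bg], dim=1))
        # Only the masked region is refined; the rest is copied from the input image.
        return mask * refined + (1 - mask) * watermarked


if __name__ == "__main__":
    x = torch.rand(1, 3, 256, 256)            # a watermarked image
    mask, coarse_bg, watermark = SplitNet()(x)
    clean = RefineNet()(x, mask, coarse_bg)   # final watermark-free prediction
    print(clean.shape)                        # torch.Size([1, 3, 256, 256])
```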
The whole project will be released around January 2021.
We synthesized four different datasets for training and testing; you can download them via Hugging Face.
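As a convenience, datasets hosted on Hugging Face can be fetched with the `huggingface_hub` library. The snippet below is only an illustration: the `repo_id` is a placeholder, so replace it with the actual dataset id linked above.

```python
# Hedged example: download the synthesized datasets from Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/<dataset-name>",  # placeholder: use the actual dataset id from the link above
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)
```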
The other pre-trained models are still being reorganized and uploaded; they will be released soon.
An easy-to-use online demo is available on Google Colab.
The local demo will be released soon.
pip install -r requirements.txt
Besides training our own methods, we also give an example of how to train s2am under our framework. More details can be found in the shell scripts.
bash examples/evaluation.sh
bash examples/test.sh
The authors would like to thank Nan Chen for her helpful discussions.
Part of the code is based on our previous work on image harmonization, s2am.
If you find our work useful in your research, please consider citing:
@misc{cun2020split,
title={Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal},
author={Xiaodong Cun and Chi-Man Pun},
year={2020},
eprint={2012.07007},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Please contact me if you have any questions (Xiaodong Cun, [email protected]).