This is a deep-learning-based pan-sharpening code package. The reimplemented methods include PNN, MSDCNN, PanNet, TFNet, SRPPNN, and our proposed network, DIPNet.
This code has been tested on a personal computer with an AMD Ryzen 5 3600 3.6 GHz processor, 32 GB RAM, and an NVIDIA RTX 2070 Super graphics card, running Python 3.7.4, PyTorch 1.6.0, CUDA 10.2, and cuDNN 8.0.2.
To run this code package on your own machine, install the required runtime libraries with the following command from the repository root:
pip install -r requirements.txt
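If you want to confirm that PyTorch can see your GPU before running anything, a minimal environment check like the following (not part of the repository) can be used:

```python
import torch

# Report the installed PyTorch build and whether CUDA is usable.
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```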
https://pan.baidu.com/s/1bvuWyIagBcFzNTp4l1eImA (extraction code: beqg)
https://pan.baidu.com/s/1Gfe6-SGT9AKHD6NzAZyJzA (extraction code: x0wx)
https://pan.baidu.com/s/11rJcRgW0n0OSimzuokH-6Q (extraction code: 4ky4)
https://drive.google.com/drive/folders/1pCaxm0onGBPDWrpBSMD2d5sbc_1iMsCB?usp=sharing
- Use './data/clip_dataset.py' to clip the images into patches.
- Before training, randomly split the dataset using './data/split_dataset.py'.
- Use './MATLAB/custum/degrade_dataset.m' to create the simulated (reduced-resolution) dataset; a rough Python sketch of the same idea follows this list.
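'./MATLAB/custum/degrade_dataset.m' builds the reduced-resolution training pairs in MATLAB. As a rough illustration of the general idea (Wald's protocol: blur the originals, decimate by the PAN/MS resolution ratio, and keep the original MS as the reference), here is a Python sketch; the filter width, resolution ratio, and array layout are assumptions, not the exact settings of the MATLAB script.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(ms, pan, ratio=4, sigma=1.0):
    """Simulate a reduced-resolution pair following Wald's protocol.

    ms  : (H, W, C) multispectral image
    pan : (rH, rW)  panchromatic image, where r = ratio
    Returns the low-resolution MS, low-resolution PAN, and the original MS
    as the full-resolution reference for supervised training.
    """
    # Blur each MS band, then decimate by the resolution ratio.
    ms_blur = np.stack(
        [gaussian_filter(ms[..., c], sigma) for c in range(ms.shape[-1])], axis=-1)
    ms_lr = ms_blur[::ratio, ::ratio, :]

    # Blur and decimate the PAN image the same way.
    pan_lr = gaussian_filter(pan, sigma)[::ratio, ::ratio]

    return ms_lr, pan_lr, ms
```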
https://drive.google.com/drive/folders/11bzOwX8AUP0fldpZ_oUVExvY9HUPg8Yt?usp=sharing
Put the downloaded pretrained models under the checkpoint directory of the corresponding satellite dataset and method, such as 'QuickBird/checkpoints_dir/deep_learning_method/supervised'.
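For example, the checkpoint path for one dataset/method pair can be composed like this (a hypothetical illustration; 'deep_learning_method' stands for the method folder you downloaded, so check the actual names in your checkpoint directory):

```python
import os

dataset_root = "QuickBird"            # satellite dataset directory
method = "deep_learning_method"       # replace with the downloaded method's folder name
ckpt_dir = os.path.join(dataset_root, "checkpoints_dir", method, "supervised")
# On Linux/macOS this prints QuickBird/checkpoints_dir/deep_learning_method/supervised
print(ckpt_dir)
```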
Before testing or inference, run 'train/_test.py' with the parameter settings for your setup, for example:
python _test.py -m ours -ms F16B2 -d D:\\实验\\影像融合\\IKONOS\\train_dataset -td D:\\实验\\影像融合\\IKONOS\\test_dataset -sd D:\\实验\\影像融合\\Deep-Learning-PanSharpening\\results\\IKONOS\\result -cd D:\\实验\\影像融合\\Deep-Learning-PanSharpening\\checkpoints\\IKONOS\\checkpoints_dir
The details of all settings can be listed with --help. After running '_test.py', you get two sets of results: '/your_save_dir/deep_learning_method/supervised/full' (full-resolution) and '/your_save_dir/deep_learning_method/supervised/reduce' (reduced-resolution).
Use './MATLAB/custum/eval_DL_method.m' to evaluate the results.
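'eval_DL_method.m' computes the quality indices in MATLAB. For a quick sanity check in Python, here is a small sketch of one common reduced-resolution index, the spectral angle mapper (SAM); it is a generic implementation, not the metric code used by the MATLAB script.

```python
import numpy as np

def sam(reference, fused, eps=1e-8):
    """Mean spectral angle (in degrees) between reference and fused MS images.

    Both inputs are (H, W, C) arrays; smaller is better, 0 means identical spectra.
    """
    ref = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    fus = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    # Cosine of the angle between corresponding spectral vectors.
    num = np.sum(ref * fus, axis=1)
    den = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1) + eps
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))
    return float(np.degrees(angles).mean())
```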
To train a model yourself, first set the parameters, for example:
python train.py -m ours -ms F16B2 -d D:\\实验\\影像融合\\IKONOS\\train_dataset -td D:\\实验\\影像融合\\IKONOS\\test_dataset -sd D:\\实验\\影像融合\\Deep-Learning-PanSharpening\\results\\IKONOS\\result -cd D:\\实验\\影像融合\\Deep-Learning-PanSharpening\\checkpoints\\IKONOS\\checkpoints_dir
The details of all settings can be listed with --help. After 'train.py' finishes, you obtain a converged set of model parameters saved under the checkpoint directory.
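'train.py' handles the full training procedure. As a purely illustrative sketch of what supervised pan-sharpening training looks like (a toy network and synthetic tensors; none of these names come from the repository):

```python
import torch
import torch.nn as nn

# Toy fusion network: concatenate the upsampled MS bands with PAN and predict the HR MS.
class ToyFusionNet(nn.Module):
    def __init__(self, ms_bands=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, ms_bands, 3, padding=1))

    def forward(self, ms_up, pan):
        return self.body(torch.cat([ms_up, pan], dim=1))

device = "cuda" if torch.cuda.is_available() else "cpu"
net = ToyFusionNet().to(device)
optim = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Synthetic batch: upsampled low-res MS, PAN, and the reference MS (reduced-resolution protocol).
ms_up = torch.rand(2, 4, 64, 64, device=device)
pan = torch.rand(2, 1, 64, 64, device=device)
ref = torch.rand(2, 4, 64, 64, device=device)

for step in range(10):
    optim.zero_grad()
    pred = net(ms_up, pan)
    loss = loss_fn(pred, ref)   # supervised loss against the reference MS
    loss.backward()
    optim.step()
```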
This code repository is based on https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix and https://github.com/zk31601102/ResidualDenseNetWork-CycleGAN-SuperResolution .
If you use DIPNet in your work, please cite our paper:
@Article{rs13142800,
  AUTHOR = {Xie, Yuchen and Wu, Wei and Yang, Haiping and Wu, Ning and Shen, Ying},
  TITLE = {Detail Information Prior Net for Remote Sensing Image Pansharpening},
  JOURNAL = {Remote Sensing},
  VOLUME = {13},
  YEAR = {2021},
  NUMBER = {14},
  ARTICLE-NUMBER = {2800},
  URL = {https://www.mdpi.com/2072-4292/13/14/2800},
  ISSN = {2072-4292},
  DOI = {10.3390/rs13142800}
}