This repository contains the implementation of the following paper:
DARTS: Double Attention Reference-based Transformer for Super-resolution
Masoomeh Aslahishahri, Jordan Ubbens, Ian Stavness
[Paper]
- Clone Repo

  ```sh
  git clone https://github.com/bia006/DARTS.git
  ```
- Create Conda Environment

  ```sh
  conda create --name DARTS python=3.8
  conda activate DARTS
  ```
- Install Dependencies

  ```sh
  cd DARTS
  pip install -r requirements.txt
  ```
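As a quick sanity check that the environment is ready (this assumes requirements.txt installs PyTorch, which the mmsr framework depends on):

```sh
python -c "import torch; print(torch.__version__, 'CUDA:', torch.cuda.is_available())"
```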
- Train Set: CUFED Dataset
- Test Set: WR-SR Dataset, CUFED5 Dataset
Please refer to Datasets.md for pre-processing and more details.
Download the pretrained models from this link and put them under the mmsr/checkpoints folder.
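For example, a minimal sketch of placing the weights (the checkpoint filename and download location here are hypothetical):

```sh
mkdir -p mmsr/checkpoints
mv ~/Downloads/DARTS_checkpoint.pth mmsr/checkpoints/  # filename is an assumption
```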
We provide quick test code with the pretrained model.
- Modify the paths to the dataset and pretrained model in the following yaml file: ./options/test/test_DARTS.yml
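- Run the quick test. The entry point below is an assumption that mirrors the training command later in this README; check the mmsr directory for the actual test script name:

  ```sh
  python mmsr/test.py -opt "options/test/test_DARTS.yml"
  ```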
- Check out the results in ./results.
All logging files generated during training, e.g., log messages, checkpoints, and snapshots, will be saved to the ./mmsr/checkpoints and ./tb_logger directories.
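Assuming ./tb_logger holds standard TensorBoard event files, training progress can be monitored with:

```sh
tensorboard --logdir ./tb_logger
```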
- Modify the paths to the dataset in the following yaml file: ./options/train/train_DARTS.yml
- Train the transformer network.

  ```sh
  python mmsr/train.py -opt "options/train/train_DARTS.yml"
  ```
For more results on the benchmarks, you can directly download our DARTS results from here.
If you find our repo useful for your research, please cite our paper.
This project is open-sourced under the MIT license. The code framework is mainly adapted from StyleSwin; please refer to the original repo for more usage details and documentation.
If you have any questions, please feel free to contact us at [email protected].