# Training and Evaluation

**Attention!** Due to three misaligned images in the SID Sony dataset (scene IDs 10034, 10045, and 10172), the results reported in the paper were computed with these images excluded. The txt file used for testing (`Sony_new_test_list.txt`) can be downloaded from Google Drive.

If you want to reproduce the metrics reported in the paper, download the aforementioned txt file and place it in the `dataset/sid/` directory.
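If you prefer to rebuild the filtered list yourself, a minimal sketch is shown below. It assumes each line of `Sony_test_list.txt` embeds the five-digit scene ID in its file paths; the output name `Sony_new_test_list_local.txt` is hypothetical, chosen so the official file from Google Drive is not overwritten.

```python
# Hedged sketch: drop the three misaligned scenes from the original test list.
# Assumption: each line of Sony_test_list.txt contains the scene ID in its paths.
MISALIGNED = ("10034", "10045", "10172")

with open("dataset/sid/Sony_test_list.txt") as f:
    lines = f.readlines()

kept = [line for line in lines if not any(sid in line for sid in MISALIGNED)]

# Hypothetical output filename, to avoid clobbering the official file.
with open("dataset/sid/Sony_new_test_list_local.txt", "w") as f:
    f.writelines(kept)

print(f"kept {len(kept)} of {len(lines)} test entries")
```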

## Data Preparation

All the txt files for training and testing can be found in Google Drive.

| Dataset | 🔗 Source | Conf. | Shot on | CFA Pattern |
| --- | --- | --- | --- | --- |
| SID Sony | Learning to See in the Dark (dataset only) | CVPR 2018 | Sony A7S2 | Bayer (RGGB) |
| SID Fuji | Learning to See in the Dark (dataset only) | CVPR 2018 | Fuji X-T2 | X-Trans |
| MCR | Abandoning the Bayer-Filter to See in the Dark (dataset only) | CVPR 2022 | MT9M001C12STC | Bayer (RGGB) |

After downloading all of the datasets above, you can symlink them into the `dataset` folder:

```bash
mkdir dataset && cd dataset
ln -s your/path/to/SID ./sid
ln -s your/path/to/MCR ./mcr
```

Or just put them directly in the `dataset` folder.

### Acceleration

Training directly on RAW-format files creates a CPU bottleneck, so we preprocess the SID data into NumPy arrays for acceleration.
Once the dataset is in the correct place, you can preprocess it with the following command:

```bash
bash scripts/preprocess_sid.sh
```

Alternatively, you can call the preprocessing script directly:

```bash
python scripts/preprocess/preprocess_sid.py --data-path [SID_DATA_PATH] --camera [CAM_MODEL] --split [EXP_TIME]
# [CAM_MODEL] in {Sony, Fuji}
# [EXP_TIME]  in {long, short}

# A simple example:
python scripts/preprocess/preprocess_sid.py --data-path dataset/sid --camera Sony --split long
```
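For intuition, here is a minimal sketch of the idea behind the packing step: a Bayer RAW frame is rearranged into a 4-channel array that can be saved as `.npy` and loaded cheaply at train time. This only illustrates the concept; `scripts/preprocess/preprocess_sid.py` is the authoritative implementation, and the RGGB channel order below is an assumption.

```python
# Conceptual sketch of Bayer "packing" (not the repository's exact script).
import numpy as np
import rawpy

def pack_rggb(path: str) -> np.ndarray:
    """Pack a Bayer RAW file into a (4, H/2, W/2) array (assumed RGGB layout)."""
    with rawpy.imread(path) as raw:
        im = raw.raw_image_visible.astype(np.uint16)
    h, w = im.shape
    h, w = h // 2 * 2, w // 2 * 2              # crop to an even size
    return np.stack((im[0:h:2, 0:w:2],         # R
                     im[0:h:2, 1:w:2],         # G1
                     im[1:h:2, 0:w:2],         # G2
                     im[1:h:2, 1:w:2]))        # B

# Example (hypothetical filename):
# np.save("dataset/sid/Sony/long_pack/00001_00_10s.npy", pack_rggb(arw_path))
```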

After all preprocessing is done, the final data folder should be organized as follows:

```
├── sid
│   ├── Fuji
│   │   ├── long
│   │   ├── long_pack
│   │   ├── long_post_int
│   │   ├── short
│   │   └── short_pack
│   ├── Sony
│   │   ├── long
│   │   ├── long_pack
│   │   ├── long_post_int
│   │   ├── short
│   │   └── short_pack
│   ├── Fuji_test_list.txt
│   ├── Fuji_train_list.txt
│   ├── Fuji_val_list.txt
│   ├── Sony_new_test_list.txt
│   ├── Sony_test_list.txt
│   ├── Sony_train_list.txt
│   └── Sony_val_list.txt
└── mcr
    ├── Mono_Colored_RAW_Paired_DATASET
    │   ├── Color_RAW_Input
    │   ├── Mono_GT
    │   └── RGB_GT
    ├── MCR_test_list.txt
    └── MCR_train_list.txt
```
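If you want to verify the layout before training, a small sanity-check sketch is below; the expected folder names are taken directly from the tree above, and the `dataset` root is assumed to be the current working directory.

```python
# Optional sanity check: confirm the expected folders from the tree above exist.
from pathlib import Path

expected = [
    "sid/Sony/long", "sid/Sony/long_pack", "sid/Sony/long_post_int",
    "sid/Sony/short", "sid/Sony/short_pack",
    "sid/Fuji/long", "sid/Fuji/long_pack", "sid/Fuji/long_post_int",
    "sid/Fuji/short", "sid/Fuji/short_pack",
    "mcr/Mono_Colored_RAW_Paired_DATASET/Color_RAW_Input",
    "mcr/Mono_Colored_RAW_Paired_DATASET/Mono_GT",
    "mcr/Mono_Colored_RAW_Paired_DATASET/RGB_GT",
]
missing = [d for d in expected if not (Path("dataset") / d).is_dir()]
print("all folders present" if not missing else f"missing: {missing}")
```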

## Pretrained Models

Download the pretrained models and put them into the `pretrained` folder for evaluation.

| Trained on | 🔗 Download Links | Config file | CFA Pattern |
| --- | --- | --- | --- |
| SID Sony | [Google Drive][Baidu Cloud] | `configs/cvpr/sony/baseline` | Bayer (RGGB) |
| SID Fuji | [Google Drive][Baidu Cloud] | `configs/cvpr/fuji/baseline` | X-Trans |
| MCR | [Google Drive][Baidu Cloud] | `configs/cvpr/mcr/baseline` | Bayer (RGGB) |
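Before running the benchmarks, you can quickly check that a downloaded checkpoint loads. A hedged sketch follows; what the `.pth` file actually contains (a plain state dict, or a dict with extra metadata) is an assumption, so adjust to its real contents.

```python
# Hedged sketch: verify a downloaded checkpoint deserializes on CPU.
# The internal structure of the .pth file is an assumption.
import torch

ckpt = torch.load("pretrained/sid_sony.pth", map_location="cpu")
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else [type(ckpt).__name__]
print("top-level keys:", keys[:10])
```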

## Evaluation

Shell scripts are provided for benchmarking DNF on the different datasets.

```bash
bash benchmarks/[SCRIPT] [CKPT]
# [SCRIPT] in {mcr.sh, sid_sony.sh, sid_fuji.sh} determines the dataset.
# [CKPT] denotes the pretrained checkpoint.
# To save images during evaluation, append the `--save-image` option.

# A simple example:
# benchmark DNF on the SID Sony dataset and save the results.
bash benchmarks/sid_sony.sh pretrained/sid_sony.pth --save-image
```
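To benchmark all three datasets in one go, a small wrapper sketch is below. Only `sid_sony.pth` appears in this document; the other two checkpoint filenames are assumptions, so rename them to match the files you actually downloaded.

```python
# Hedged convenience wrapper: run all three benchmark scripts in sequence.
import subprocess

jobs = {
    "benchmarks/sid_sony.sh": "pretrained/sid_sony.pth",
    "benchmarks/sid_fuji.sh": "pretrained/sid_fuji.pth",  # assumed filename
    "benchmarks/mcr.sh": "pretrained/mcr.pth",            # assumed filename
}
for script, ckpt in jobs.items():
    subprocess.run(["bash", script, ckpt, "--save-image"], check=True)
```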

## Training

Training from scratch!

```bash
# Just use your config file!
python runner.py -cfg [CFG]
```
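For example, to retrain each baseline, you could loop over the config paths listed in the Pretrained Models table. Whether `-cfg` expects the directory shown there or a specific file inside it is an assumption; adjust the paths as needed.

```python
# Hedged sketch: launch training for each baseline config from the table above.
import subprocess

configs = [
    "configs/cvpr/sony/baseline",
    "configs/cvpr/fuji/baseline",
    "configs/cvpr/mcr/baseline",
]
for cfg in configs:
    subprocess.run(["python", "runner.py", "-cfg", cfg], check=True)
```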