Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, Ling Shao
- A lightweight, fast and extended version of MIRNet has been accepted in TPAMI. Paper | Code
- A Keras tutorial on MIRNet is available at https://keras.io/examples/vision/mirnet/
- Video on the TensorFlow YouTube channel: https://youtu.be/BMza5yrwZ9s
- Links to (unofficial) implementations are added here
Abstract: With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in surveillance, computational photography, medical imaging, and remote sensing. Recently, convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks. Existing CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatially precise but contextually less robust results are achieved, while in the latter case, semantically reliable but spatially less accurate outputs are generated. In this paper, we present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network and receiving strong contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing several key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention-based multi-scale feature aggregation. In a nutshell, our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on five real image benchmark datasets demonstrate that our method, named MIRNet, achieves state-of-the-art results for a variety of image processing tasks, including image denoising, super-resolution and image enhancement.
Network Architecture
- Selective Kernel Feature Fusion (SKFF)
- Downsampling Module
- Dual Attention Unit (DAU)
- Upsampling Module
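For reference, below is a minimal PyTorch sketch of the SKFF idea: the parallel resolution streams are fused through a shared descriptor and a softmax over per-stream attention weights. The class name, reduction factor and layer choices are illustrative, not the repository's exact implementation.

```python
import torch
import torch.nn as nn

class SKFFSketch(nn.Module):
    """Illustrative Selective Kernel Feature Fusion over N equally-shaped streams."""
    def __init__(self, channels, num_streams=3, reduction=8):
        super().__init__()
        d = max(channels // reduction, 4)
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # global average pooling -> (B, C, 1, 1)
            nn.Conv2d(channels, d, kernel_size=1),    # compact channel descriptor
            nn.PReLU(),
        )
        # one 1x1 conv per stream producing its (unnormalised) attention vector
        self.attend = nn.ModuleList(
            [nn.Conv2d(d, channels, kernel_size=1) for _ in range(num_streams)]
        )
        self.softmax = nn.Softmax(dim=0)              # normalise across streams

    def forward(self, streams):                       # streams: list of (B, C, H, W) tensors
        stacked = torch.stack(streams, dim=0)         # (N, B, C, H, W)
        z = self.squeeze(stacked.sum(dim=0))          # descriptor from the summed streams
        attn = torch.stack([fc(z) for fc in self.attend], dim=0)  # (N, B, C, 1, 1)
        attn = self.softmax(attn)                     # per-channel weights over the streams
        return (stacked * attn).sum(dim=0)            # attention-weighted aggregation

# usage: fuse three feature maps of the same resolution
# x1, x2, x3 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
# fused = SKFFSketch(channels=64)([x1, x2, x3])
```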
The model is built in PyTorch 1.1.0 and tested on an Ubuntu 16.04 environment (Python 3.7, CUDA 9.0, cuDNN 7.5).
To install, follow these instructions:
sudo apt-get install cmake build-essential libjpeg-dev libpng-dev
conda create -n pytorch1 python=3.7
conda activate pytorch1
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=9.0 -c pytorch
pip install matplotlib scikit-image opencv-python yacs joblib natsort h5py tqdm
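A quick, generic sanity check that the environment picked up the intended PyTorch/CUDA build (not part of the repository):

```python
import torch, torchvision

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```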
- Download the SIDD-Medium dataset from here
- Generate image patches
python generate_patches_SIDD.py --ps 256 --num_patches 300 --num_cores 10
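Roughly speaking, patch generation crops aligned fixed-size windows from each noisy/clean image pair; the sketch below illustrates the idea (the actual generate_patches_SIDD.py may differ in sampling strategy and file layout):

```python
import numpy as np

def random_patches(noisy, clean, ps=256, num_patches=300, seed=0):
    """Crop aligned ps x ps patches from a noisy/clean pair of (H, W, 3) arrays."""
    rng = np.random.default_rng(seed)
    h, w = clean.shape[:2]
    pairs = []
    for _ in range(num_patches):
        r = rng.integers(0, h - ps + 1)
        c = rng.integers(0, w - ps + 1)
        pairs.append((noisy[r:r + ps, c:c + ps], clean[r:r + ps, c:c + ps]))
    return pairs
```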
- Download validation images of SIDD and place them in ../SIDD_patches/val
- Install the warmup scheduler
cd pytorch-gradual-warmup-lr; python setup.py install; cd ..
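Once installed, the scheduler is typically wrapped around a base schedule: a few warm-up epochs ramp the learning rate up, then cosine annealing takes over. The optimizer, learning rate and epoch counts below are placeholders, not the exact training settings:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from warmup_scheduler import GradualWarmupScheduler  # provided by the setup.py step above

model = torch.nn.Conv2d(3, 3, 3, padding=1)           # stand-in for the MIRNet model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

warmup_epochs, total_epochs = 3, 60                    # illustrative values
cosine = CosineAnnealingLR(optimizer, T_max=total_epochs - warmup_epochs, eta_min=1e-6)
scheduler = GradualWarmupScheduler(optimizer, multiplier=1.0,
                                   total_epoch=warmup_epochs, after_scheduler=cosine)

for epoch in range(total_epochs):
    # ... one training epoch (forward, loss, backward, optimizer.step()) ...
    scheduler.step()
    print(epoch, optimizer.param_groups[0]["lr"])
```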
- Train your model with default arguments by running
python train_denoising.py
Note: Our model is trained with 2 Nvidia Tesla-V100 GPUs. See #5 for changing the model parameters.
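Training in the paper minimises the Charbonnier loss (a smooth L1 variant) between restored and ground-truth patches; a minimal sketch, with eps set to the commonly used 1e-3:

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Charbonnier loss: mean of sqrt((x - y)^2 + eps^2) over all elements."""
    def __init__(self, eps=1e-3):
        super().__init__()
        self.eps = eps

    def forward(self, restored, target):
        diff = restored - target
        return torch.mean(torch.sqrt(diff * diff + self.eps * self.eps))
```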
You can download the complete MIRNet repository (including pre-trained models, datasets, results, etc.) at once from this Google Drive link, or evaluate individual tasks with the following instructions:
- Download the model and place it in ./pretrained_models/denoising/
- Download sRGB images of SIDD and place them in ./datasets/sidd/
- Run
python test_sidd_rgb.py --save_images
- Download sRGB images of DND and place them in ./datasets/dnd/
- Run
python test_dnd_rgb.py --save_images
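The test scripts follow the usual PyTorch evaluation pattern. Below is a rough, generic sketch of running a pretrained checkpoint on a single image; the model import, checkpoint path and key names are placeholders and may not match the repository exactly:

```python
import cv2
import numpy as np
import torch

from networks.MIRNet_model import MIRNet  # hypothetical import path; see the repository code

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MIRNet().to(device).eval()

ckpt = torch.load("./pretrained_models/denoising/model_denoising.pth", map_location=device)
model.load_state_dict(ckpt.get("state_dict", ckpt))   # checkpoint key name may differ

img = cv2.cvtColor(cv2.imread("noisy.png"), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)   # (1, 3, H, W)

with torch.no_grad():
    restored = torch.clamp(model(x), 0, 1)             # restored image in [0, 1]
```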
- Download the models and place them in ./pretrained_models/super_resolution/
- Download images of different scaling factors and place them in ./datasets/super_resolution/
- Run
python test_super_resolution.py --save_images --scale 3
python test_super_resolution.py --save_images --scale 4
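Results for these tasks are reported as PSNR/SSIM against the ground truth; the snippet below shows how such metrics are typically computed from saved results with scikit-image (installed above; paths are illustrative):

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored = cv2.imread("./results/super_resolution/x4/img_001.png")   # illustrative paths
target = cv2.imread("./datasets/super_resolution/gt/img_001.png")

psnr = peak_signal_noise_ratio(target, restored, data_range=255)
# older scikit-image versions use multichannel=True instead of channel_axis=-1
ssim = structural_similarity(target, restored, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```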
- Download the LOL model and place it in ./pretrained_models/enhancement/
- Download images of the LOL dataset and place them in ./datasets/lol/
- Run
python test_enhancement.py --save_images --input_dir ./datasets/lol/ --result_dir ./results/enhancement/lol/ --weights ./pretrained_models/enhancement/model_lol.pth
- Download the FiveK model and place it in ./pretrained_models/enhancement/
- Download some sample images of the FiveK dataset and place them in ./datasets/fivek_sample_images/
- Run
python test_enhancement.py --save_images --input_dir ./datasets/fivek_sample_images/ --result_dir ./results/enhancement/fivek/ --weights ./pretrained_models/enhancement/model_fivek.pth
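The --input_dir/--result_dir pattern used by the enhancement script boils down to iterating the input images and writing the outputs; a rough sketch with a stand-in model (load the real network as in the denoising sketch above):

```python
import glob, os
import cv2
import numpy as np
import torch
from natsort import natsorted

input_dir, result_dir = "./datasets/lol/", "./results/enhancement/lol/"
os.makedirs(result_dir, exist_ok=True)

model = torch.nn.Sequential()   # identity stand-in; replace with the loaded enhancement model
model.eval()

for path in natsorted(glob.glob(os.path.join(input_dir, "*.png"))):
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        y = torch.clamp(model(x), 0, 1)
    out = (y.squeeze(0).permute(1, 2, 0).numpy() * 255.0).round().astype(np.uint8)
    cv2.imwrite(os.path.join(result_dir, os.path.basename(path)),
                cv2.cvtColor(out, cv2.COLOR_RGB2BGR))
```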
Experiments are performed on five real image datasets for different image processing tasks, including image denoising, super-resolution and image enhancement. Images produced by MIRNet can be downloaded from this Google Drive link.
- Tensorflow (Soumik Rakshit)
- Tensorflow-JS (Rishit Dagli)
- Tensorflow-TFLite (Sayak Paul)
If you use MIRNet, please consider citing:
@inproceedings{Zamir2020MIRNet,
title={Learning Enriched Features for Real Image Restoration and Enhancement},
author={Syed Waqas Zamir and Aditya Arora and Salman Khan and Munawar Hayat
and Fahad Shahbaz Khan and Ming-Hsuan Yang and Ling Shao},
booktitle={ECCV},
year={2020}
}
Should you have any questions, please contact [email protected]
- Learning Enriched Features for Fast Image Restoration and Enhancement, TPAMI 2022. Paper | Code
- Restormer: Efficient Transformer for High-Resolution Image Restoration, CVPR 2022. Paper | Code
- Multi-Stage Progressive Image Restoration, CVPR 2021. Paper | Code
- CycleISP: Real Image Restoration via Improved Data Synthesis, CVPR 2020. Paper | Code