📖 SMFAN: A Lightweight Self-Modulation Feature Aggregation Network for Efficient Image Super-Resolution
An overview of the proposed SMFAN.
- Run the [`run.sh`](./run.sh) script:

  ```bash
  CUDA_VISIBLE_DEVICES=0 python test_demo.py --data_dir [path to your data dir] --save_dir [path to your save dir] --model_id 24
  ```

- Be sure to change the `--data_dir` and `--save_dir` directories to your own paths. A hedged sketch of what this command does for a single image follows this list.
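
As a rough illustration of what the test command does for one image, here is a minimal, self-contained sketch: it builds the ×4 SMFAN model, loads a checkpoint, super-resolves a single low-resolution image, and saves the result. The checkpoint path (`model_zoo/team24_smfan.pth`), the input/output file names, and the assumption that the model consumes RGB tensors in [0, 1] are illustrative only; `test_demo.py` remains the authoritative evaluation script.

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

from models.team24_smfan import SMFAN

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build the x4 model with the same configuration used for the challenge entry.
model = SMFAN(dim=24, n_blocks=8, ffn_scale=1.5, upscaling_factor=4, bias=False)

# Hypothetical checkpoint path; adjust to wherever the pretrained weights live.
# Depending on how the checkpoint was saved, the state dict may be nested
# (e.g. under a "params" key) and need unwrapping first.
state = torch.load("model_zoo/team24_smfan.pth", map_location=device)
model.load_state_dict(state)
model.eval().to(device)

# Read one low-resolution image as an RGB tensor in [0, 1] (assumed convention).
lr = to_tensor(Image.open("lr_input.png").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    sr = model(lr).clamp(0, 1)  # x4 super-resolved output

to_pil_image(sr.squeeze(0).cpu()).save("sr_output.png")
```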
To calculate the number of parameters, FLOPs, and activations of the model, run the following snippet:
```python
from utils.model_summary import get_model_flops, get_model_activation
from models.team24_smfan import SMFAN

model = SMFAN(dim=24, n_blocks=8, ffn_scale=1.5, upscaling_factor=4, bias=False)

input_dim = (3, 256, 256)  # set the input dimension (C, H, W) of the LR image

# Number of activations and Conv2d layers
activations, num_conv = get_model_activation(model, input_dim)
activations = activations / 10 ** 6
print("{:>16s} : {:<.4f} [M]".format("#Activations", activations))
print("{:>16s} : {:<d}".format("#Conv2d", num_conv))

# FLOPs of a single forward pass
flops = get_model_flops(model, input_dim, False)
flops = flops / 10 ** 9
print("{:>16s} : {:<.4f} [G]".format("FLOPs", flops))

# Total number of parameters
num_parameters = sum(map(lambda x: x.numel(), model.parameters()))
num_parameters = num_parameters / 10 ** 6
print("{:>16s} : {:<.4f} [M]".format("#Params", num_parameters))
```
Our team (VPEG_C) placed 2nd and 3rd in the FLOPs and Parameters sub-tracks of the NTIRE 2024 ESR Challenge, respectively.