SVFR is a unified framework for generalized video face restoration. It supports blind face restoration (BFR), colorization, inpainting, and combinations of these tasks within one cohesive system.
| Case 1 (BFR) | Case 2 (BFR) |
| --- | --- |
| case1_bfr.mp4 | case4_bfr.mp4 |

| Case 3 (BFR + Colorization) | Case 4 (BFR + Colorization) |
| --- | --- |
| case10_bfr_colorization.mp4 | case12_bfr_colorization.mp4 |

| Case 5 (BFR + Colorization + Inpainting) | Case 6 (BFR + Colorization + Inpainting) |
| --- | --- |
| case14_bfr+colorization+inpainting.mp4 | case15_bfr+colorization+inpainting.mp4 |
- [2025.01.02]: We released the initial version of the inference code and models. Stay tuned for continuous updates!
- [2024.12.17]: This repo was created!
Use the following commands to create and activate a conda environment for SVFR from scratch:

```bash
conda create -n svfr python=3.9 -y
conda activate svfr
```
Install PyTorch, making sure to select the build that matches your CUDA version, for example:

```bash
pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2
```
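After installing, you can optionally confirm that the PyTorch build can see your GPU. This is a minimal sanity check, not part of the SVFR pipeline:

```python
# Optional check: confirm the installed PyTorch build can use CUDA.
import torch

print(torch.__version__)          # e.g. 2.2.2
print(torch.cuda.is_available())  # True if the wheel matches your driver/CUDA setup
```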
Install the dependencies:

```bash
pip install -r requirements.txt
```
Download the Stable Video Diffusion base model with git-lfs:

```bash
conda install git-lfs
git lfs install
git clone https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt models/stable-video-diffusion-img2vid-xt
```
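If git-lfs is inconvenient, here is a sketch of an equivalent download using the `huggingface_hub` Python package (an assumption on our part: it requires `pip install huggingface_hub`; the repo id is the same as in the clone command above):

```python
# Alternative download path using huggingface_hub instead of git-lfs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
    local_dir="models/stable-video-diffusion-img2vid-xt",
)
```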
You can download the SVFR checkpoints manually through the link on Google Drive.

Place the checkpoints as follows:
```
└── models
    ├── face_align
    │   └── yoloface_v5m.pt
    ├── face_restoration
    │   ├── unet.pth
    │   ├── id_linear.pth
    │   └── insightface_glint360k.pth
    └── stable-video-diffusion-img2vid-xt
        ├── vae
        ├── scheduler
        └── ...
```
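Before running inference, you may want to verify that everything landed in the right place. A hypothetical helper sketch, with file names taken from the tree above:

```python
# Hypothetical helper: verify the checkpoint layout shown above.
from pathlib import Path

expected = [
    "models/face_align/yoloface_v5m.pt",
    "models/face_restoration/unet.pth",
    "models/face_restoration/id_linear.pth",
    "models/face_restoration/insightface_glint360k.pth",
    "models/stable-video-diffusion-img2vid-xt",
]
missing = [p for p in expected if not Path(p).exists()]
print("All checkpoints found." if not missing else f"Missing: {missing}")
```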
Run inference with:

```bash
python3 infer.py \
  --config config/infer.yaml \
  --task_ids 0 \
  --input_path ./assert/lq/lq1.mp4 \
  --output_dir ./results/
```
The `--task_ids` flag selects which task(s) to run (see the example after this list):

```
0     -- bfr
1     -- colorization
2     -- inpainting
0,1   -- bfr and colorization
0,1,2 -- bfr, colorization, and inpainting
...
```
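For example, to run colorization only on a grayscale clip (the input path here is illustrative):

```bash
python3 infer.py \
  --config config/infer.yaml \
  --task_ids 1 \
  --input_path ./assert/lq/lq1.mp4 \
  --output_dir ./results/
```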
For inference with inpainting, add `--mask_path` if you need to specify the mask file:

```bash
python3 infer.py \
  --config config/infer.yaml \
  --task_ids 0,1,2 \
  --input_path ./assert/lq/lq3.mp4 \
  --output_dir ./results/ \
  --mask_path ./assert/mask/lq3.png
```
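If you need to author a mask yourself, here is a minimal sketch using NumPy and Pillow. The assumption that white pixels mark the region to inpaint is ours, so check the provided masks in `./assert/mask/` for the actual convention:

```python
# Minimal sketch for creating an inpainting mask
# (assumption: white = area to inpaint; verify against the provided masks).
import numpy as np
from PIL import Image

h, w = 512, 512                        # match your input video resolution
mask = np.zeros((h, w), dtype=np.uint8)
mask[180:330, 160:360] = 255           # white rectangle over the occluded region
Image.fromarray(mask).save("./assert/mask/my_mask.png")
```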
```bibtex
@misc{wang2025svfrunifiedframeworkgeneralized,
  title={SVFR: A Unified Framework for Generalized Video Face Restoration},
  author={Zhiyao Wang and Xu Chen and Chengming Xu and Junwei Zhu and Xiaobin Hu and Jiangning Zhang and Chengjie Wang and Yuqi Liu and Yiyi Zhou and Rongrong Ji},
  year={2025},
  eprint={2501.01235},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2501.01235},
}
```