
DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations (CVPR 2024)


Tianhao Qi*, Shancheng Fang, Yanze Wu✝, Hongtao Xie✉, Jiawei Liu,
Lang Chen, Qian He, Yongdong Zhang


(*Work done during an internship at ByteDance, ✝Project Lead, ✉Corresponding author)

From the University of Science and Technology of China and ByteDance.

🔆 Introduction

TL;DR: We propose DEADiff, a generic method facilitating the synthesis of novel images that embody the style of a given reference image and adhere to text prompts.

⭐⭐ Stylized Text-to-Image Generation.

Stylized text-to-image results. Resolution: 512 x 512. (Compressed)

📝 Changelog

  • [2024.4.3]: 🔥🔥 Released the inference code and the pretrained checkpoint.
  • [2024.3.5]: 🔥🔥 Released the project page.

⏳ TODO

  • Release the inference code.
  • Release training data.

⚙️ Setup

conda create -n deadiff python=3.9.2
conda activate deadiff
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install git+https://github.com/salesforce/LAVIS.git@20230801-blip-diffusion-edit
pip install -r requirements.txt
pip install -e .
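
After running the commands above, a quick stdlib-only check can confirm that the key dependencies actually installed into the active environment before you attempt inference. This is a minimal sketch, not part of the repository; the package list is an assumption based on the setup steps and the Gradio app mentioned below — adjust it to match your needs:

```python
import importlib.util

def missing_packages(names=("torch", "torchvision", "gradio")):
    """Return the packages from `names` that are not importable
    in the current environment (without importing them)."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All listed packages are importable.")
```

`find_spec` only locates each package rather than importing it, so the check is fast and has no side effects.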

💫 Inference

  1. Download the pretrained model from Hugging Face and put it under ./pretrained/.
  2. Run the following command in a terminal.
python3 scripts/app.py

The Gradio app lets you transfer the style of a reference image onto newly generated images. Try it out to explore the available options.
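
Step 1 above assumes the checkpoint sits under ./pretrained/, and the app will fail if it is missing. A minimal stdlib sketch for sanity-checking the directory before launching; the .ckpt/.pth extensions are an assumption about the checkpoint format, so adjust them to match the file you actually downloaded:

```python
from pathlib import Path

def find_checkpoints(pretrained_dir="./pretrained"):
    """Return checkpoint-like files under the pretrained directory,
    or an empty list if the directory is absent or empty."""
    root = Path(pretrained_dir)
    if not root.is_dir():
        return []
    files = list(root.glob("*.ckpt")) + list(root.glob("*.pth"))
    return sorted(str(p) for p in files)

if __name__ == "__main__":
    found = find_checkpoints()
    if found:
        print("Found checkpoints:", found)
    else:
        print("No checkpoint found under ./pretrained/ — download it first.")
```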

Prompt: "A curly-haired boy"

Prompt: "A robot"

Prompt: "A motorcycle"

📢 Disclaimer

We developed this repository for RESEARCH purposes only; it may be used solely for personal, research, or other non-commercial purposes.


✈️ Citation

@article{qi2024deadiff,
  title={DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations},
  author={Qi, Tianhao and Fang, Shancheng and Wu, Yanze and Xie, Hongtao and Liu, Jiawei and Chen, Lang and He, Qian and Zhang, Yongdong},
  journal={arXiv preprint arXiv:2403.06951},
  year={2024}
}

📭 Contact

If you have any comments or questions, feel free to contact [email protected].
