LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models
Yang Yang, Wen Wang, Liang Peng, Chaotian Song, Yao Chen, Hengjia Li, Xiaolong Yang, Qinglin Lu, Deng Cai, Wei Liu, Boxi Wu
- Memory optimization in `optimized_branch` (the existing code has substantial GPU memory redundancy: with N concept LoRAs it stores N separate pipelines).
- Code Released.
- Python >= 3.9 (Anaconda or Miniconda recommended)
- Diffusers==0.14.0
- PyTorch >= 1.12
- NVIDIA GPU (60 GB memory) + CUDA
- Install PyTorch
conda create -n loracomposer python=3.10 -y
conda activate loracomposer
# install PyTorch
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch -y
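(Optional) A quick sanity check that PyTorch installed correctly and can see your GPU:
# check the torch version and CUDA availability
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"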
- Clone repo & install
git clone https://github.com/Young98CN/LoRA_Composer.git
cd LoRA_Composer
pip install .
# Install diffusers==0.14.0 with T2I-Adapter support (installed from source)
git clone -b T2IAdapter-for-mixofshow https://github.com/guyuchao/diffusers-t2i-adapter.git
cd diffusers-t2i-adapter
pip install .
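(Optional) Confirm that the patched diffusers resolved to the expected version:
# should print 0.14.0
python -c "import diffusers; print(diffusers.__version__)"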
We adopt ChilloutMix for real-world concepts and Anything-v4 for anime concepts.
cd experiments/pretrained_models
# Diffusers-version ChilloutMix
git lfs clone https://huggingface.co/windwhinny/chilloutmix.git
# Diffusers-version Anything-v4
git lfs clone https://huggingface.co/xyn-ai/anything-v4.0.git --exclude="anything*, Anything*, example*, *.safetensors"
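After cloning, both base models should sit side by side; a quick listing (still from experiments/pretrained_models) should show the two folders used in the directory layout below:
# expect anything-v4.0/ and chilloutmix/
ls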
mkdir t2i_adapter
cd t2i_adapter
# sketch/openpose adapter of T2I-Adapter
wget https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth
wget https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth
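A listing should now show both adapter checkpoints:
# expect t2iadapter_sketch_sd14v1.pth and t2iadapter_openpose_sd14v1.pth
ls -lh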
To quickly reproduce our results, we provide the following resources used in the paper.
| Paper Resources | Concept Datasets | Single-Concept ED-LoRAs |
| --- | --- | --- |
| Download Link | Google Drive | Google Drive |
After downloading, the path should be arranged as follows:
LoRA_Composer
├── mixofshow
├── scripts
├── options
├── experiments
│ ├── composed_edlora # composed ED-LoRA
│ ├── pretrained_models
│ │ ├── anything-v4.0
│ │ ├── chilloutmix
│ │ ├── t2i_adapter/t2iadapter_*_sd14v1.pth
├── datasets
│ ├── data # ** Put the dataset in here **
│ │ ├── characters/
│ │ ├── objects/
│ │ ├── scenes/
│ ├── data_cfgs/lora_composer
│ │ ├── single-concept # train single-concept edlora configs
│ │ ├── region_lora # merge edlora configs
├── loras # ** Put Single-Concept ED-LoRAs in here **
│ ├── anime
│ ├── background
│ ├── real
...
- Merge the LoRA with the base model
# run from the repository root folder
find scripts/lora_composer_scripts/merge_EDLoRA -type f -name "*.sh" -exec echo "Executing {}" \; -exec bash {} \;
python scripts/lora_composer_scripts/merge_EDLoRA/link_lora2folder.py
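To merge a single concept instead of sweeping every script, you can invoke its merge script directly and then rerun the linking step; for example, using the anime script that also appears in the customization section below:
# merge one ED-LoRA (example: Haibara_Ai) and refresh the folder links
bash scripts/lora_composer_scripts/merge_EDLoRA/anime/merge_ai.sh
python scripts/lora_composer_scripts/merge_EDLoRA/link_lora2folder.py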
- Partial results from the paper can be obtained by running the following commands (the `--image_guidance` option activates conditions such as pose and sketch):
bash scripts/lora_composer_scripts/paper_result_scripts/lora_composer_anime.sh
bash scripts/lora_composer_scripts/paper_result_scripts/lora_composer_real.sh
- Follow the official implementation of ED-LoRA, or the newer release version (recommended, as it yields better performance).
- Then merge the LoRA with the pre-trained model (for example, Haibara_Ai).
# Checkpoints from the newer release version must first be converted to the LoRA checkpoint format
python scripts/lora_composer_scripts/convert_old_EDLoRA.py ${ckpt_path} ${save_path}
# Then merge the weights (for older-version checkpoints, only this command is needed)
bash scripts/lora_composer_scripts/merge_EDLoRA/anime/merge_ai.sh
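For instance, for a newer-release Haibara_Ai checkpoint the two steps might look like the following; both paths are placeholders, so substitute the checkpoint you trained and the save location your merge config expects:
# hypothetical paths -- replace with your own checkpoint and save locations
python scripts/lora_composer_scripts/convert_old_EDLoRA.py experiments/EDLoRA_Ai/models/edlora_model-latest.pth loras/anime/EDLoRA_Ai.pth
bash scripts/lora_composer_scripts/merge_EDLoRA/anime/merge_ai.sh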
This project is released under the Apache 2.0 license.
This codebase builds on Mix-of-Show; thanks for open-sourcing! We also acknowledge the following great open-source projects:
- T2I-Adapter (https://github.com/TencentARC/T2I-Adapter).
@article{yang2024loracomposer,
title = {LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models},
author = {Yang Yang and Wen Wang and Liang Peng and Chaotian Song and Yao Chen and Hengjia Li and Xiaolong Yang and Qinglin Lu and Deng Cai and Boxi Wu and Wei Liu},
year = {2024},
journal = {arXiv preprint arXiv:2403.11627}
}
If you have any questions or suggestions for improvement, please email Yang Yang ([email protected]) or open an issue.