About Flux LoRA #1038
Replies: 22 comments 44 replies
-
Hi there. I'll start with the Impressionist Landscape LoRA for Flux (direct link). I've tried it with flux1-dev-fp8.safetensors and nothing happened: exact same picture with and without the LoRA, same seed and parameters obviously. Prompt was:
-
None of the base XLabs-AI LoRAs are working. Link: For example, anime_lora_comfy_converted.safetensors is OK, but anime_lora.safetensors gives this error:
-
Tested with a LoRA I created yesterday using the ostris AI Toolkit (local PC, 24 GB VRAM). Generated with the mklanFluxDevV1FP8_mklanFluxv1 model, and it works great! Thanks!
-
So all regular Flux LoRAs are now working on NF4? That's amazing if so; you're a genius, Illya~
-
It's not working for me.
-
Patching LoRAs gets to 80%, then it crashes every time. The LoRA is 164 MB, using dev-nf4 (a 30 MB LoRA works fine).
-
I trained a LoRA on Civitai (using the Kohya method and Flux Dev; 18 MB LoRA size). I am trying to use this LoRA in Forge, but Forge gives me some errors during LoRA patching. It gets stuck at step 271/304 and freezes my browser. (System: RTX 3060 12 GB, 32 GB RAM) LOG --> Startup time: 241.2s (prepare environment: 49.4s, launcher: 39.3s, import torch: 62.4s, setup paths: 0.2s, initialize shared: 3.5s, other imports: 26.0s, setup gfpgan: 0.5s, list extensions: 0.3s, list SD models: 2.7s, load scripts: 34.5s, load upscalers: 0.1s, initialize extra networks: 9.6s, cleanup temp dir: 6.9s, create ui: 6.1s, gradio launch: 2.0s).
-
I downloaded these two NSFW LoRAs (for testing ;-)) and neither of them seems to have any effect on the picture. The result is 100% the same as without the LoRA: https://civitai.com/models/642272/flux-topless?modelVersionId=718406 I did use the And I get a
-
Can you add CPU swap for the LoRA patching? It's still crashing for me, and I think that's the issue.
-
I don't think LoRAs are working. I tried many Flux LoRAs and most of them don't work with NF4 (despite the note about it being the one and only native support). Take this one for example: https://civitai.com/models/633553
-
Maybe you already know this, but I created a LoRA using standard settings with ai-toolkit (https://github.com/ostris/ai-toolkit)
-
Do LoRAs work with Flux Dev NF4 v2?
-
The LoRA patching takes quite a while for each generation. Is there a way to cache the patch once it has been done for a particular model, or is that not possible?
-
I tested several LoRA models and none of them had any effect. I created several images with the same seed, with a different model each time, and nothing changed. Where is the error? Thanks in advance.
-
I created a Flux LoRA using CivitAI's generator. The model they use is "Flux.1 D", i.e. the DEV one. It took me a while to find a way to use it in Forge, so I'm posting this in case it's of help to anyone trying something similar. Using this version of Forge at the moment: The only non-crashing, reasonable-time way of using the LoRA on my system is with these settings: Forge console shows this about patching the LoRA: I've made a with/without comparison image to show that the LoRA is having an obvious effect (although I had to increase the weight to 1.4). The Flux checkpoint I'm using is Flux.1-Dev GGUF Q4.0, which I got on CivitAI here: The with/without image is in my CivitAI post here (SFW):
-
Do LoRAs for Flux work now?
-
Is it necessary for the patching to be done before every inference? Can LoRAs be pre-patched in an NF4/FP8-compatible format? I am asking because on my setup with a 3060 12 GB it takes 150 s to pre-patch the LoRA, and the whole system freezes from time to time during that patching process. @lllyasviel
-
If no one has said it yet, I wanted to share that I solved it by installing the flux1-dev-fp8.safetensors model from lllyasviel's Flux repository, found at the following link: https://huggingface.co/lllyasviel/flux1_dev/tree/main
-
Has anyone got this LoRA to work? It takes forever to process any prompt when I use this one: https://civitai.com/models/430687/detailifier-fluxsdxlponysd15?modelVersionId=739154
-
Man, I'm really tired of my SDXL LoRAs reloading every single time; it makes generating charts comparing LoRA versions a pain, with so much HDD noise and so many pauses between the rows. Why can't you just hold them in RAM? Why doesn't the OS do it? I've seen entire checkpoints being cached in RAM, judging by how fast and effortlessly they're swapped, but not some 200 MB LoRA files?
-
How to report LoRAs that do not work
Recently, many Flux training codebases are very dynamic and LoRA formats may be somewhat complicated. If you find some LoRAs that do not work, please give BOTH:
in this post.
How to Skip "Patching LoRAs" & How to make LoRAs more precise on low-bit models
If you use:
Then it means you will bake (precompute) the LoRAs into the same precision as your diffusion model. If your model is NF4, your LoRAs will be merged into that model, and everything stays NF4. Patching this may take some time, but diffusion speed will not change after the patching.
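For readers wondering what this "baking" looks like in code, here is a minimal, self-contained sketch of merging one LoRA pair into one low-bit base weight. The helper names (`dequantize`, `quantize`, `bake_lora_into_weight`) are hypothetical stand-ins for whatever NF4/FP8 backend is used, not Forge's actual functions:

```python
# Minimal sketch of "baking" a LoRA into a low-bit base weight.
# Hypothetical names; Forge's real patching code differs, and the
# quantize/dequantize helpers stand in for the NF4/FP8 backend.
import torch

def dequantize(w_lowbit: torch.Tensor) -> torch.Tensor:
    # Stand-in: expand the stored low-bit weight to fp16 for arithmetic.
    return w_lowbit.to(torch.float16)

def quantize(w_fp16: torch.Tensor) -> torch.Tensor:
    # Stand-in: real code would round back to NF4/FP8 storage here.
    return w_fp16

def bake_lora_into_weight(w_lowbit, lora_down, lora_up, alpha, rank, strength=1.0):
    """Merge one LoRA pair into one base weight, then re-store it.

    w_lowbit:  base weight as stored in the checkpoint (NF4/FP8/...)
    lora_down: A matrix, shape (rank, in_features)
    lora_up:   B matrix, shape (out_features, rank)
    """
    w = dequantize(w_lowbit).float()                           # work in higher precision
    delta = (lora_up.float() @ lora_down.float()) * (alpha / rank) * strength
    return quantize((w + delta).to(torch.float16))             # back to model precision

# Toy usage: a 16x32 layer with a rank-4 LoRA.
w = torch.randn(16, 32, dtype=torch.float16)
A = torch.randn(4, 32, dtype=torch.float16)
B = torch.randn(16, 4, dtype=torch.float16)
patched = bake_lora_into_weight(w, A, B, alpha=4.0, rank=4)
```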
If you use:
Then it means your LoRA will always use higher precision, no matter what precision your base model uses. This requires computing the LoRAs on the fly in every diffusion iteration. If you use a single LoRA, diffusion will only be a bit slower; if you use multiple LoRAs, diffusion will be much slower. However, you can use this option to skip "Patching LoRAs", since precomputed patches are no longer needed.
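By contrast with the baking path above, the on-the-fly option leaves the low-bit base weights untouched and adds the LoRA term at every forward pass, which is why each extra LoRA adds extra per-step work. A rough sketch, with a hypothetical wrapper class rather than Forge's actual implementation:

```python
# Rough sketch of the "on-the-fly" path: the base layer stays low-bit,
# and the LoRA term is added in higher precision at every forward call.
# Class and attribute names are illustrative, not Forge's internals.
import torch
import torch.nn as nn

class LoraOnTheFly(nn.Module):
    def __init__(self, base_linear, lora_down, lora_up, alpha, rank, strength=1.0):
        super().__init__()
        self.base = base_linear        # quantized/low-bit layer, left untouched
        self.lora_down = lora_down     # A matrix, (rank, in_features), kept in high precision
        self.lora_up = lora_up         # B matrix, (out_features, rank), kept in high precision
        self.scale = strength * alpha / rank

    def forward(self, x):
        y = self.base(x)               # low-bit matmul as usual
        # Extra high-precision work happens here on every iteration, for every LoRA.
        lora_y = (x.to(self.lora_down.dtype) @ self.lora_down.T) @ self.lora_up.T
        return y + self.scale * lora_y.to(y.dtype)

# Toy usage with a 32 -> 16 layer and a rank-4 LoRA.
base = nn.Linear(32, 16, bias=False)
A = torch.randn(4, 32)
B = torch.randn(16, 4)
layer = LoraOnTheFly(base, A, B, alpha=4.0, rank=4)
out = layer(torch.randn(2, 32))
```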
How to load a LoRA only once rather than on each generation
If you do not change the LoRA weights, it will only load once. If it does not, please report a BUG in an issue.
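As a rough illustration of that rule, think of the patch as cached under the LoRA file and its strength: re-patching only happens when either changes. The names below are hypothetical; this is not Forge's actual cache.

```python
# Illustration of the caching rule: a patch is reused as long as the
# (lora file, strength) pair is unchanged. Hypothetical names only.
from functools import lru_cache

@lru_cache(maxsize=4)
def load_and_patch(lora_path: str, strength: float):
    print(f"patching {lora_path} @ {strength}")    # the slow work happens only here
    return ("patched-model", lora_path, strength)  # stand-in for the patched weights

load_and_patch("style.safetensors", 0.8)   # first call: patches
load_and_patch("style.safetensors", 0.8)   # same weights: cached, no re-patching
load_and_patch("style.safetensors", 1.0)   # changed strength: patches again
```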