
sdxl support (for SAI/HuggingFace/diffuser/community models) #1952

Merged: lllyasviel merged 99 commits into main from sdxl on Sep 4, 2023

Conversation

@lllyasviel (Collaborator) commented Aug 23, 2023

@lllyasviel (Collaborator Author)

Unfortunately, webui removed the --test argument in AUTOMATIC1111/stable-diffusion-webui#10291.
If no one understands what happened in that PR, we will probably have to permanently disable all future GitHub CI tests.

@FurkanGozukara

Announced it on my Twitter; looking forward to this.

Thank you!

https://twitter.com/GozukaraFurkan/status/1694303882652913995

@kohya-ss (Contributor) commented Sep 4, 2023

Hello,
Thank you for your incredible work and for integrating ControlNet-LLLite support.

I don't usually apply multiple LLLites at once. However, there seem to be two ways to do so (see the sketch after this list):

1. Apply them sequentially:
   q = q + lllite1(q)
   q = q + lllite2(q)
2. Apply them in parallel:
   q = q + lllite1(q) + lllite2(q)

Upon testing, both methods seem to yield almost identical results. Currently, only method 1 is achievable with ComfyUI, but personally, I feel that method 2 might be more appropriate.

If the implementation seems challenging, I believe method 1 would be fine as well.
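A minimal sketch of the difference between the two methods, using toy torch.nn.Linear layers as hypothetical stand-ins for the LLLite conditioning networks (not the actual implementation):

    import torch

    torch.manual_seed(0)
    lllite1 = torch.nn.Linear(64, 64)  # toy stand-in for the first LLLite's residual branch
    lllite2 = torch.nn.Linear(64, 64)  # toy stand-in for the second LLLite's residual branch
    q = torch.randn(2, 16, 64)         # a toy attention query tensor (batch, tokens, dim)

    # Method 1 (sequential): lllite2 sees q already modified by lllite1.
    q1 = q + lllite1(q)
    q1 = q1 + lllite2(q1)

    # Method 2 (parallel): both residuals are computed from the original q.
    q2 = q + lllite1(q) + lllite2(q)

    # The two differ only by lllite2's response to lllite1's residual,
    # a second-order cross term; for small residuals the outputs are close.
    print((q1 - q2).abs().max())

That cross term being small is consistent with the near-identical results reported above.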

@lllyasviel (Collaborator Author)

Method 2 is implemented.

@lllyasviel (Collaborator Author)

Multi IP-Adapter is not very speed-efficient right now; I should probably optimize it a bit.

@JackEllie

UnetHook fixed, tests work smoothly!!

@AugmentedRealityCat

I tried to fix the unetHook problem - please try again

It works for me now! Thanks a lot for the fix.

@AugmentedRealityCat commented Sep 4, 2023

Using HiRes Fix with ControlNet-LLLite gives an error message and does not complete the image generation process.

In the WebUI itself I get the following error message:
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 3744 but got size 14976 for tensor number 1 in the list.

Here is the full log.

venv "C:\stable-diffusion-webui\venvxformers\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.6.0-78-gd39440bf
Commit hash: d39440bfb9d3b20338fc23a78e6655b1e2f7c1d5
Installing pyqt5 requirement for depthmap script
Launching Web UI with arguments: --xformers
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Using cache found in C:\Users\User/.cache\torch\hub\isl-org_ZoeDepth_main
img_size [384, 512]
Using cache found in C:\Users\User/.cache\torch\hub\intel-isl_MiDaS_master
Params passed to Resize transform:
        width:  512
        height:  384
        resize_target:  True
        keep_aspect_ratio:  True
        ensure_multiple_of:  32
        resize_method:  minimal
Using pretrained resource url::https://github.com/isl-org/ZoeDepth/releases/download/v1.0/ZoeD_M12_N.pt
Loaded successfully
[-] ADetailer initialized. version: 23.9.1, num models: 9
2023-09-04 01:07:20,394 - ControlNet - INFO - ControlNet v1.1.400
ControlNet preprocessor location: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-09-04 01:07:20,458 - ControlNet - INFO - ControlNet v1.1.400
Loading FABRIC v0.6.1
[Vec. CC] Style Sheet Loaded...
Loading weights [e6bb9ea85b] from C:\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0_0.9vae.safetensors
WARNING:py.warnings:C:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py:182: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  self.img2img_selected_display = gr.Image(value=None, type="pil", label="Selected image", visible=is_img2img).style(height=256)

WARNING:py.warnings:C:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py:183: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  self.txt2img_selected_display = gr.Image(value=None, type="pil", label="Selected image", visible=not is_img2img).style(height=256)

WARNING:py.warnings:C:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py:190: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  upload_img_input = gr.Image(type="pil", label="Upload image").style(height=256)

WARNING:py.warnings:C:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py:201: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  like_gallery = gr.Gallery(label="Liked images", elem_id="fabric_like_gallery").style(columns=4, height=128)

WARNING:py.warnings:C:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py:207: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  dislike_gallery = gr.Gallery(label="Disliked images", elem_id="fabric_dislike_gallery").style(columns=4, height=128)

add tab
WARNING:py.warnings:C:\stable-diffusion-webui\extensions\a1111-sd-zoe-depth\gradio_depth_pred.py:19: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  input_image = gr.Image(label="Input Image", type='pil', elem_id='img-display-input').style(height="auto")

Creating model from config: C:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 17.6s (prepare environment: 2.7s, import torch: 1.6s, import gradio: 0.5s, setup paths: 0.4s, initialize shared: 0.1s, other imports: 0.3s, list SD models: 0.4s, load scripts: 8.5s, create ui: 2.7s, gradio launch: 0.2s).
Applying attention optimization: xformers... done.
Model loaded in 6.7s (load weights from disk: 0.9s, create model: 0.2s, apply weights to model: 3.7s, load textual inversion embeddings: 1.1s, calculate empty prompt: 0.6s).
2023-09-04 01:08:18,184 - ControlNet - INFO - Loading model: kohya_controllllite_xl_canny_anime [7158f7e0]
2023-09-04 01:08:18,499 - ControlNet - INFO - Loaded state_dict from [C:\stable-diffusion-webui\extensions\sd-webui-controlnet\models\kohya_controllllite_xl_canny_anime.safetensors]
2023-09-04 01:08:18,744 - ControlNet - INFO - ControlNet model kohya_controllllite_xl_canny_anime [7158f7e0] loaded.
2023-09-04 01:08:18,750 - ControlNet - INFO - Loading preprocessor: canny
2023-09-04 01:08:18,750 - ControlNet - INFO - preprocessor resolution = 512
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.80it/s]
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(hkywzx79e5q9eie)', 'woman in street, fashion', 'anime, drawing, cartoon, bad, low quality', [], 20, 'DPM++ 2M SDE Karras', 1, 1, 7, 832, 1152, True, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000020E527B2620>, 0, False, 'sd_xl_refiner_1.0_0.9vae.safetensors [8d0ce6c016]', 0.6, 1234, False, -1, 0, 0, 0, 0, 0, 0, 0, 0.25, False, True, False, False, False, 'base', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, True, False, 0, -1, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 0, 16, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020E527B18D0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020E527B0880>, 
<scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020E527B3010>, [], [], False, 0, 0.8, 0, 0.8, 0.5, False, False, 0.5, 8192, -1.0, 0, False, False, 0, 1, 1, 0, 0, 0, 0, False, 'Straight Abs.', 'Flat', False, 0.75, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, True, False, 0, 'Range', 1, 'GPU', True, False, False, False, False, 0, 448, False, 448, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right', 'red-cyan-anaglyph'], 2, 0, '∯boost∯clipdepth∯clipdepth_far∯clipdepth_mode∯clipdepth_near∯compute_device∯do_output_depth∯gen_normalmap∯gen_rembg∯gen_simple_mesh∯gen_stereo∯model_type∯net_height∯net_size_match∯net_width∯normalmap_invert∯normalmap_post_blur∯normalmap_post_blur_kernel∯normalmap_pre_blur∯normalmap_pre_blur_kernel∯normalmap_sobel∯normalmap_sobel_kernel∯output_depth_combine∯output_depth_combine_axis∯output_depth_invert∯pre_depth_background_removal∯rembg_model∯save_background_removal_masks∯save_outputs∯simple_mesh_occlude∯simple_mesh_spherical∯stereo_balance∯stereo_divergence∯stereo_fill_algo∯stereo_modes∯stereo_offset_exponent∯stereo_separation') {}
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 412, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\processing.py", line 1156, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "C:\stable-diffusion-webui\modules\processing.py", line 1242, in sample_hr_pass
        samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
      File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
        return self.model(x, t, cond)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 799, in forward_webui
        raise e
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 796, in forward_webui
        return forward(*args, **kwargs)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 706, in forward
        h = module(h, emb, context)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
        x = layer(x, context)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
        x = block(x, context=context[i])
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
        return checkpoint(
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 165, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 182, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 467, in _forward
        self.attn1(
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 482, in xformers_attention_forward
        q_in = self.to_q(x)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_lllite.py", line 207, in forward
        hack = module(x) * weight
      File "C:\stable-diffusion-webui\venvxformers\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_lllite.py", line 94, in forward
        cx = torch.cat([cx, self.down(x)], dim=1 if self.is_conv2d else 2)
    RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 3744 but got size 14976 for tensor number 1 in the list.

---

EDIT: and here is a screenshot of the UI when I get the error message. This provides an overview of all the settings I was using.
[Screenshot 2023-09-04 at 01-10-15: Stable Diffusion UI with all settings used]
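A side note on the numbers (my own reading, not from the thread): 14976 is exactly 4x 3744, which is what a 2x hires upscale produces, since both latent dimensions double and the attention token count quadruples. It looks like the LLLite conditioning was computed for the first-pass resolution and then concatenated with hires-sized activations. A minimal sketch reproducing just the shape clash, with illustrative sizes and a hypothetical down projection (not the extension's actual code):

    import torch

    down = torch.nn.Linear(320, 64)       # hypothetical stand-in for the LLLite down-projection
    cx = torch.randn(2, 3744, 64)         # conditioning cached at first-pass resolution (52*72 tokens)
    x_hires = torch.randn(2, 14976, 320)  # hires-pass attention input (104*144 tokens, 4x the first pass)

    torch.cat([cx, down(x_hires)], dim=2)
    # RuntimeError: Sizes of tensors must match except in dimension 2.
    # Expected size 3744 but got size 14976 for tensor number 1 in the list.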

@lllyasviel (Collaborator Author)

high res fix problem fixed

@lllyasviel (Collaborator Author)

I am wondering whether we should just merge, let people use and test it, and then update it in real time.

Actually, we have a very bad tradition: last time, when we updated to ControlNet 1.1, all development happened on the main branch and people just updated. That is a really bad tradition, but it also makes things move really fast.

But even in that case, I will test all SD 1.5 functionality before we really do that.

@lllyasviel (Collaborator Author)

gosh, you guys are really bad for upvoting this

@ptmarks commented Sep 4, 2023

Just saw that Auto1111 1.6 + Deforum + CN are working. I assume that is main, not sdxl? If you merge, does anything change?

@lllyasviel (Collaborator Author)

Unfortunately a lot of files are changed, but this is inevitable.

@ptmarks commented Sep 4, 2023

Unfortunately a lot of files are changed, but this is inevitable.

Yep, assumed so. I just tried an SDXL ControlNet in my production Auto1111 + Deforum and it errored out.

Reading ControlNet 1 base frame #0 at C:\SDXL\stable-diffusion-webui\outputs\img2img-images\Deforum_BenV5XL\controlnet_1_inputframes\000000000.jpg
Reading ControlNet 2 base frame #0 at C:\SDXL\stable-diffusion-webui\outputs\img2img-images\Deforum_BenV5XL\controlnet_2_inputframes\000000000.jpg
2023-09-04 11:31:21,598 - ControlNet - INFO - Loading model: sai_xl_depth_256lora [73ad23d1]
2023-09-04 11:31:21,762 - ControlNet - INFO - Loaded state_dict from [C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\models\sai_xl_depth_256lora.safetensors]
2023-09-04 11:31:21,763 - ControlNet - INFO - controlnet_sdxl_config (using lora)
2023-09-04 11:31:21,805 - ControlNet - INFO - ControlNet model sai_xl_depth_256lora [73ad23d1] loaded.
2023-09-04 11:31:21,828 - ControlNet - INFO - Loading preprocessor: depth
2023-09-04 11:31:21,828 - ControlNet - INFO - preprocessor resolution = 512
Downloading: "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt" to C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads\midas\dpt_hybrid-midas-501f0c75.pt

100%|████████████████████████████████████████████████████████████████████████████████| 470M/470M [00:04<00:00, 112MB/s]
*** Error running process: C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\SDXL\stable-diffusion-webui\modules\scripts.py", line 619, in process
        script.process(p, *script_args)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 692, in process
        model_net = Script.load_control_model(p, unet, unit.model)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 320, in load_control_model
        model_net = Script.build_control_model(p, unet, model)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 338, in build_control_model
        raise RuntimeError(f"model not found: {model}")
    RuntimeError: model not found: None


0%| | 0/5 [00:00<?, ?it/s]

START OF TRACEBACK
Traceback (most recent call last):
  File "C:\SDXL\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\run_deforum.py", line 110, in run_deforum
    render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, root)
  File "C:\SDXL\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\render.py", line 575, in render_animation
    image = generate(args, keys, anim_args, loop_args, controlnet_args, root, parseq_adapter, frame_idx, sampler_name=scheduled_sampler_name)
  File "C:\SDXL\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\generate.py", line 76, in generate
    return generate_inner(args, keys, anim_args, loop_args, controlnet_args, root, parseq_adapter, frame, sampler_name)
  File "C:\SDXL\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\scripts\deforum_helpers\generate.py", line 279, in generate_inner
    processed = processing.process_images(p)
  File "C:\SDXL\stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\SDXL\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\SDXL\stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "C:\SDXL\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\SDXL\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\SDXL\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\SDXL\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\SDXL\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\SDXL\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\SDXL\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 201, in forward
    devices.test_for_nans(x_out, "unet")
  File "C:\SDXL\stable-diffusion-webui\modules\devices.py", line 136, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
END OF TRACEBACK

User-friendly error message:
Error: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check. Please check your schedules / init values.
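For reference, on a Windows install like the one in these logs, the usual place to apply the suggested flag is the COMMANDLINE_ARGS line of webui-user.bat (assuming the stock launcher; keep any flags already there):

    set COMMANDLINE_ARGS=--no-half

The "Upcast cross attention layer to float32" option mentioned in the error is the alternative, found under Settings > Stable Diffusion.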

@lllyasviel (Collaborator Author)

probably we should add a note about this to the release notes

@sirPhoebus

Sorry for the dumb question, but how do you force A1111 to use ControlNet v1.1.400 so I can test SDXL?

@ptmarks commented Sep 4, 2023

I added --no-half to see if I could get past this error. I did, but now when it runs it says it can't find the models:

Reading ControlNet 1 base frame #14 at C:\SDXL\stable-diffusion-webui\outputs\img2img-images\Deforum_BenV5XL\controlnet_1_inputframes\000000014.jpg
Reading ControlNet 2 base frame #14 at C:\SDXL\stable-diffusion-webui\outputs\img2img-images\Deforum_BenV5XL\controlnet_2_inputframes\000000014.jpg
2023-09-04 11:42:40,473 - ControlNet - INFO - Loading model: sai_xl_depth_256lora [73ad23d1]
2023-09-04 11:42:40,657 - ControlNet - INFO - Loaded state_dict from [C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\models\sai_xl_depth_256lora.safetensors]
2023-09-04 11:42:40,657 - ControlNet - INFO - controlnet_sdxl_config (using lora)
2023-09-04 11:42:41,194 - ControlNet - INFO - ControlNet model sai_xl_depth_256lora [73ad23d1] loaded.
2023-09-04 11:42:41,228 - ControlNet - INFO - Loading preprocessor: depth
2023-09-04 11:42:41,228 - ControlNet - INFO - preprocessor resolution = 512
*** Error running process: C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\SDXL\stable-diffusion-webui\modules\scripts.py", line 619, in process
        script.process(p, *script_args)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 692, in process
        model_net = Script.load_control_model(p, unet, unit.model)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 320, in load_control_model
        model_net = Script.build_control_model(p, unet, model)
      File "C:\SDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 338, in build_control_model
        raise RuntimeError(f"model not found: {model}")
    RuntimeError: model not found: None

@MindJuice1

Sorry for the dumb question, but how do you force A1111 to use ControlNet v1.1.400 so I can test SDXL?

Install it from this GitHub repo in the webui, with the branch name set to "sdxl".
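If the extension is already installed, one way to switch it to the test branch is with git from the extension folder (a sketch assuming the default install path, not official instructions):

    cd stable-diffusion-webui/extensions/sd-webui-controlnet
    git fetch origin
    git checkout sdxl
    git pull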

@lllyasviel (Collaborator Author)

we do not recommend that users without git knowledge participate in the test, since it is likely to break user environments and the users' update mechanism

@lllyasviel (Collaborator Author)

OK, all my tests passed. I am going to press the button soon.

@lllyasviel marked this pull request as ready for review September 4, 2023 17:30
@lllyasviel merged commit 1d54023 into main Sep 4, 2023
@lllyasviel (Collaborator Author)

merged; see you in #2039

@lllyasviel deleted the sdxl branch September 4, 2023 17:43
@MindJuice1

we do not recommend that users without git knowledge participate in the test, since it is likely to break user environments and the users' update mechanism

ahh I see, my bad 👍

Push that button! 👯

@AugmentedRealityCat commented Sep 4, 2023

@lllyasviel, have you seen the new OpenPose ControlNet LoRA from Thibaud that was released yesterday?

https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/blob/main/control-lora-openposeXL2-rank256.safetensors

And the new Blur ControlNets released by Kohya earlier today?

https://huggingface.co/kohya-ss/controlnet-lllite/blob/main/controllllite_v01016032e_sdxl_blur_anime_beta.safetensors
https://huggingface.co/kohya-ss/controlnet-lllite/blob/main/controllllite_v01032064e_sdxl_blur-anime_500-1000.safetensors

EDIT: Kohya also refers to a new preprocessor:

The recommended preprocessing for the blur model is Gaussian blur.

Quote taken from: https://huggingface.co/kohya-ss/controlnet-lllite
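As an illustration of that preprocessing (a hypothetical sketch using OpenCV; the sigma value is a guess to tune, not Kohya's recommendation):

    import cv2

    img = cv2.imread("input.png")
    # ksize=(0, 0) lets OpenCV derive the kernel size from sigma.
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=9)
    cv2.imwrite("control_blur.png", blurred)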

Do you think these should be included in your ControlNet models collection?

Are you planning to maintain that collection in the future? Is that something you need help with?

Repository owner locked as resolved and limited conversation to collaborators Sep 5, 2023