[Bug]: Inpainting doesn't work with Batch cond/uncond optimization on #13382

Open
younyokel opened this issue Sep 24, 2023 · 4 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@younyokel

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

The optimization setting called Batch cond/uncond makes inpainting unusable.

Steps to reproduce the problem

  1. Go to Settings - Optimizations
  2. Turn on Batch cond/uncond, Save
  3. Use inpainting

What should have happened?

Inpainting should work instead of failing with an error.

Sysinfo

sysinfo-2023-09-24-22-29.txt

What browsers do you use to access the UI?

Microsoft Edge

Console logs

Traceback (most recent call last):
      File "D:\Program Files\Stable Diffusion WebUI\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\Program Files\Stable Diffusion WebUI\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\modules\img2img.py", line 220, in img2img
        processed = process_images(p)
      File "D:\Program Files\Stable Diffusion WebUI\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "D:\Program Files\Stable Diffusion WebUI\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\modules\processing.py", line 869, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\Program Files\Stable Diffusion WebUI\modules\processing.py", line 1530, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\Program Files\Stable Diffusion WebUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\Program Files\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\Program Files\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\Program Files\Stable Diffusion WebUI\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\Program Files\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\Program Files\Stable Diffusion WebUI\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1337, in forward
        xc = torch.cat([x] + c_concat, dim=1)
    RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 2 for tensor number 1 in the list.
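
For context, this RuntimeError is the generic shape check that torch.cat applies: every tensor in the list must agree on all dimensions other than the one being concatenated. Below is a minimal sketch that reproduces the same message with illustrative shapes; the 1-vs-2 batch sizes are an assumption standing in for a cond-only latent batch versus image conditioning still batched for cond + uncond, not values taken from the run above:

    import torch

    x = torch.zeros(1, 4, 64, 64)            # noisy latent, batch size 1 (illustrative)
    c_concat = [torch.zeros(2, 5, 64, 64)]   # inpainting image conditioning, batch size 2 (illustrative)

    # Same channel-wise concat as ddpm.py line 1337; the batch dimensions
    # disagree, so torch.cat raises:
    #   RuntimeError: Sizes of tensors must match except in dimension 1.
    #   Expected size 1 but got size 2 for tensor number 1 in the list.
    xc = torch.cat([x] + c_concat, dim=1)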

Additional information

No response

younyokel added the bug-report label Sep 24, 2023
@mariaWitch

mariaWitch commented Oct 9, 2023

I have confirmed this issue on a different setup, and I have also confirmed that disabling Batch cond/uncond fixes it. I tested this on the most recent dev commit, v1.6.0-209-g7d60076b.

This could be related to #12227, given that it seems to be the only PR that touched the hijacking mechanics for inpainting models. Someone should test this in v1.5.x with the --always-batch-cond-uncond flag passed on startup, since that option wasn't enabled by default in previous versions of the WebUI. As such, this could be a problem that has existed for a long time and simply wasn't caught until now because the option was off by default.

Given past bug reports, this is definitely not confined to just Batch cond/uncond.
See: #10802
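
For anyone digging into the mechanics: SD inpainting checkpoints take a 9-channel UNet input, the 4-channel noisy latent concatenated channel-wise with a 4-channel masked-image latent plus a 1-channel mask, so the latent batch and the image-conditioning batch have to stay in lockstep. A rough sketch of the expected shapes (sizes are illustrative; this is not the actual WebUI code):

    import torch

    batch = 2                                    # cond + uncond batched together
    x = torch.zeros(batch, 4, 64, 64)            # noisy latent
    image_cond = torch.zeros(batch, 5, 64, 64)   # masked-image latent (4ch) + mask (1ch)

    # With matching batch sizes the concat succeeds and produces the
    # 9-channel input an inpainting UNet expects.
    unet_in = torch.cat([x, image_cond], dim=1)
    assert unet_in.shape == (batch, 9, 64, 64)

The traceback above suggests one of those two tensors was built for a single sample while the other still held both the cond and uncond entries.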

@mariaWitch

Basically, to work around this issue you can do either of the following: set NGMS (Negative Guidance minimum sigma) to zero, or disable Batch cond/uncond.

@mariaWitch

@w-e-w @AUTOMATIC1111
Given the breaking nature of this issue, could one of you two take a closer look at this? It seems to have been present since NGMS was introduced, but was never really discovered because Batch cond/uncond was off by default until v1.6.0.
It's possible the issue was introduced with #9177 but no one ever noticed.

@younyokel
Author

> Basically, to work around this issue you can do either of the following: set NGMS (Negative Guidance minimum sigma) to zero, or disable Batch cond/uncond.

Seems like it, thanks for pointing it out.
