[Community Pipelines] Accelerate inference of AnimateDiff by IPEX on CPU #8643
Conversation
Hi @patrickvonplaten and @pcuenca, could you please help review this PR? It uses almost the same optimization methods and nearly the same code structure as the one I previously proposed and merged for the SDXL pipeline (#6683). Thanks a lot!
Thank you for working on this and for your patience! It looks mostly good to me, although I'm unable to test on a Xeon CPU at the moment. I have a few suggestions; could you please address them?
```python
ckpt = f"animatediff_lightning_{step}step_diffusers.safetensors"
base = "emilianJR/epiCRealism"  # Choose your favorite base model.
adapter = MotionAdapter().to(device, dtype)
```
Would actually prefer if `from_pretrained` could be used consistently, but this is no problem either.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @a-r-r-o-w, thank you so much for the detailed review! It was thoughtful and professional and helps improve the code quality. 🙂 With the new commit I have addressed almost all of the suggestions you commented on; could you please review again? Except for the …
Thank you for applying the review comments, and for the kind words! The …
Hi @a-r-r-o-w, the reason we did not support …
I see, that is insightful. I do not have a deep understanding of torch.jit.trace (or TorchScript in general) due to limited experience, but I've encountered NoneType issues in the past, so it is understandable why it might be hard to support those arguments. Optional type hinting could probably help (?), but I believe that would require many changes to the diffusers core code, which is out of scope, or fake inputs as you mention; it's okay not to do that here since this is an example pipeline. Thanks for your contribution!
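As a toy illustration of the tracing limitation discussed above (a generic sketch, not code from this PR): `torch.jit.trace` records only the operations executed for the concrete example inputs, so a Python-level branch on a `None` argument is frozen into the trace.

```python
import torch

class AddBias(torch.nn.Module):
    def forward(self, x, bias=None):
        # Python-level control flow: tracing records only the path taken.
        if bias is not None:
            return x + bias
        return x

m = AddBias()
x = torch.ones(2)

# Trace with bias omitted: the "bias is None" branch is baked into the
# graph, so the traced module can never apply a bias afterwards.
traced = torch.jit.trace(m, (x,))
print(torch.allclose(traced(x), m(x)))
```

This is why Optional arguments that are sometimes `None` are hard to support in a traced pipeline without dummy inputs or core type-hint changes.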
Thanks a lot for your understanding of this situation, which is quite a headache! 😀 I would like to thank you again for reviewing and merging the code!
…CPU (huggingface#8643)

* add animatediff_ipex community pipeline
* address the 1st round review comments
Hi, this pipeline aims to speed up inference of AnimateDiff on Intel Xeon CPUs on Linux. It is very similar to the previous pipeline I proposed for SDXL, which was merged in #6683.
By using this optimized pipeline, we can get about a 1.5x-2.2x performance speedup with BFloat16 on 5th Gen Intel Xeon CPUs, code-named Emerald Rapids.
It is also recommended to run on PyTorch/IPEX v2.0 or above to get the best performance boost.
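For context, the BF16 path boils down to running the model under CPU autocast (a minimal generic sketch in plain PyTorch, not the PR's IPEX-optimized code; `ipex.optimize` additionally applies operator fusion and weight repacking on top of this):

```python
import torch

# A stand-in model; the real pipeline applies this to the AnimateDiff
# UNet and VAE modules.
model = torch.nn.Linear(8, 8)
x = torch.randn(1, 8)

# CPU autocast runs eligible ops (e.g. linear/conv) in bfloat16.
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```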
The main benefits, which are the same as in our previous PR, are illustrated below:
Below are the tables showing test results for AnimateDiff-Lightning (a lightning-fast text-to-video model distilled from AnimateDiff SD1.5 v2, which uses the AnimateDiff pipeline) with 1/2/4/8 steps on an Intel® Xeon® Platinum 8582C processor (60 cores/socket, 1 socket) with data type BF16:
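The speedup figures quoted above are simply the ratio of baseline latency to optimized latency, e.g. (with made-up numbers, not this PR's measured results):

```python
# Hypothetical per-run latencies in seconds (NOT measured values from this PR).
baseline_latency = 44.0   # eager FP32 pipeline
optimized_latency = 20.0  # IPEX + BF16 pipeline

speedup = baseline_latency / optimized_latency
print(f"{speedup:.1f}x")  # 2.2x
```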
Could you please help review? Thanks!