
Newbie - Block size limitation pytorch - Intel ARC a770 16GB #11

Closed
kodiconnect opened this issue Jan 26, 2025 · 3 comments
Comments

@kodiconnect

First, thank you for building this!

I'm getting this error: "Current platform can NOT allocate memory block with size larger than 4GB! Tried to allocate 7.03 GiB (GPU 0; 15.91 GiB total capacity; 2.29 GiB already allocated; 2.50 GiB reserved in total by PyTorch)" I suspect it can be resolved with something like: "export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512"

Has anyone run into this? If so, what was your solution other than accepting the limit?

@kodiconnect
Author

Well, ignore me — I answered my own question in the question. Setting this as an environment variable in the Dockerfile corrected the issue:

```dockerfile
# Set environment variable inside the container
ENV PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```

@simonlui
Owner

That won't solve the core issue. I refer you to intel/intel-extension-for-pytorch#325, where the issue is still being discussed. If you have Battlemage or newer, it will work without hassle. But on Alchemist and similar first-generation Xe architecture cards, there are changes that make it work technically, yet in practice there are still issues for ComfyUI and video/image generation purposes. There is no real ETA for a fix, and if one does land, it will appear in a PyTorch nightly build before an IPEX build.

@kodiconnect
Author

kodiconnect commented Jan 26, 2025 via email
