OOM issues #5
Hi, thanks for your interest in our work!
Hi, I am glad to hear that this work is of interest to you! I have finally figured out how to run inference in less than 32 GB. The key is to perform full-precision inference in Stage 1 (the VQGAN) and mixed-precision inference in Stage 2 (the autoregressive decoder). The codebase has been updated with these changes, so you should be able to run inference on an NVIDIA V100 now. Feel free to report in this thread if you face any further problems. Thanks!
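For reference, a minimal sketch of the pattern described above, assuming a standard PyTorch setup; the `vqgan` and `ar_decoder` handles and their `encode`/`generate`/`decode` methods are hypothetical placeholders, not the repository's actual API.

```python
import torch

@torch.no_grad()
def two_stage_inference(vqgan, ar_decoder, images, text_tokens):
    """Stage 1 in full precision, Stage 2 under mixed precision.

    `vqgan` and `ar_decoder` are assumed to be torch.nn.Module instances
    already moved to the GPU; method names are illustrative only.
    """
    # Stage 1: run the VQGAN in full precision (fp32) to keep the
    # codebook quantization numerically stable.
    vqgan.float()
    image_tokens = vqgan.encode(images.float())

    # Stage 2: run the autoregressive decoder under autocast, so matmuls
    # are executed in fp16, roughly halving activation memory and letting
    # inference fit within 32 GB (e.g. on a V100).
    with torch.cuda.amp.autocast():
        generated_tokens = ar_decoder.generate(text_tokens, image_tokens)

    # Decode the generated tokens back to pixels in fp32.
    frames = vqgan.decode(generated_tokens.long())
    return frames
```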
I am trying to train the code on an A100 (80 GB VRAM) and it keeps failing due to OOM.
I haven't changed your code, but I still get the following OOM error.
I am not sure why this is happening and would appreciate any suggestions.
Hey, thanks for the interesting work.
I was trying to run train_story.sh, but ran into memory issues on an NVIDIA V100. Would you be able to share the configuration you ran it on, and whether there are ways to decrease the GPU memory requirements?