Very high memory usage when training Stage 1 #10
Comments
I used 4 A100s with a batch size of 32 for the paper, and the checkpoint I shared was trained on 4 L40s with a batch size of 16. You can either decrease the batch size to 4 (since you only have one GPU), or you can decrease the …
Also, for the first stage you can try mixed precision; in my experience it doesn't seem to decrease the reconstruction quality, and it results in much faster training and about half the memory use. All you need is …
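For readers who want to try this, here is a minimal sketch of enabling mixed precision with HuggingFace Accelerate (the exact change suggested above is truncated in this thread; the model, optimizer, and data below are placeholders, not the repo's objects):

```python
import torch
from accelerate import Accelerator

# Request fp16 autocast plus gradient scaling from Accelerate.
accelerator = Accelerator(mixed_precision="fp16")

# Dummy model/optimizer/data stand in for the real training objects.
model = torch.nn.Linear(80, 80)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = torch.utils.data.DataLoader(torch.randn(64, 80), batch_size=16)

# Wrapping with prepare() lets Accelerate handle casting and device placement.
model, optimizer, data = accelerator.prepare(model, optimizer, data)

for batch in data:
    optimizer.zero_grad()
    out = model(batch)          # forward pass runs under autocast
    loss = out.pow(2).mean()    # dummy loss for illustration
    accelerator.backward(loss)  # Accelerate applies fp16 loss scaling
    optimizer.step()
```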
Thanks! I'll give it a try!
Got another error when using mixed precision:
What seems to be the problem? Thanks!
@godspirit00 It says the loss is NaN because you are using mixed precision. I think you may have to change the loss here: https://github.com/yl4579/StyleTTS2/blob/main/losses.py#L22 with …
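The exact replacement suggested above is truncated in this thread, but as a general illustration, one common way to keep a log-magnitude term from going NaN under fp16 is to clamp its inputs and compute it in float32 (the function name, epsilon, and loss form below are assumptions, not the repo's code):

```python
import torch
import torch.nn.functional as F

def safe_log_mag_l1(pred_mag: torch.Tensor, target_mag: torch.Tensor,
                    eps: float = 1e-7) -> torch.Tensor:
    # Disable autocast for this numerically sensitive term and clamp the
    # magnitudes away from zero so log() cannot produce -inf/NaN in fp16.
    with torch.cuda.amp.autocast(enabled=False):
        pred = pred_mag.float().clamp(min=eps)
        target = target_mag.float().clamp(min=eps)
        return F.l1_loss(torch.log(pred), torch.log(target))
```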
The error persists after changing the line.
It was at epoch 50.
Could it be related to the discriminators, or is it a TMA issue? Can you set the TMA epoch to a higher number but set …
I tried searching for …
Sorry, I meant that you just modify the code to train the discriminator but not the aligner. But if it still doesn't work, you probably have to do it without mixed precision. Unfortunately, it is highly sensitive to batch size, so it probably only works for large enough batches (like 16 or 32).
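As a rough sketch of "train the discriminator but not the aligner" (the attribute names `text_aligner`, `msd`, and `mpd` are assumptions for illustration, not necessarily the repo's exact modules):

```python
import torch.nn as nn

def set_requires_grad(module: nn.Module, flag: bool) -> None:
    # Enable or disable gradient tracking for every parameter of a module.
    for p in module.parameters():
        p.requires_grad = flag

# Inside the training loop, one could do something like:
#   set_requires_grad(model.text_aligner, epoch >= tma_start_epoch)  # delay joint TMA training
#   set_requires_grad(model.msd, True)                               # discriminators always train
#   set_requires_grad(model.mpd, True)
```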
I have traced this to Line 305 in 3e30081.
It does make me wonder, @yl4579, whether this is expected. Were the parameters of the pitch_extractor supposed to be updated?
@stevenhillis I think you are correct. The pitch_extractor actually shouldn't be updated. Does removing that line fix the mixed-precision problem?
Good deal. Sure does! |
I had the same 'inf' crash at epoch 50 with mixed precision and a batch size of 4. I can confirm that this change allows training to continue past epoch 50.
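For anyone applying the same idea by hand, a minimal sketch of building the optimizer only over the modules that should actually train, so a pretrained pitch extractor cannot be touched by any optimizer step (all module names and shapes below are hypothetical stand-ins):

```python
import itertools
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real submodules.
decoder = nn.Linear(256, 80)
text_encoder = nn.Linear(256, 256)
pitch_extractor = nn.GRU(80, 256)  # pretrained; should stay fixed

# Only pass the trainable submodules to the optimizer; the pitch extractor
# is left out entirely, so optimizer.step() can never modify its weights.
optimizer = torch.optim.AdamW(
    itertools.chain(decoder.parameters(), text_encoder.parameters()),
    lr=1e-4,
)
```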
@yl4579 Thanks for your great work.
Hello,
Thanks for the great work.
I'm trying to train a model on my dataset using an A5000 (24 GB VRAM). I kept getting OOM errors at the beginning of Stage 1. After repeatedly reducing the batch size, training was finally able to run with a batch size of 4.
Is this normal? What hardware were you using?
Thanks!