Worse translation results after updating to pytorch 0.4.0 #27
@HsinYingLee I have a similar problem. How many images do you have in your training set (trainA and trainB)?
@milankolkata According to the recent commit, the degraded results are due to setting …
@HsinYingLee Thanks for mentioning it. I am trying the new code. How many images did you use when training the model?
@milankolkata In commit 972e42, the custom layernorm only supports one image per batch. With the new commit 4c21350, it supports multiple images per batch. However, the time required for each iteration increases by about 4× when you use a batch size greater than 1. (This is due to a change in the way pytorch implements ….) BTW, I am still confirming whether the performance is the same for pytorch 0.3 and 0.4.
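For readers wondering what "supporting multiple images per batch" means for a custom layernorm: the key is computing the mean and std per sample (over C, H, W) rather than over the whole batch. The sketch below is illustrative only — the class name, parameters, and details are assumptions, not MUNIT's actual implementation.

```python
import torch
import torch.nn as nn

class LayerNorm(nn.Module):
    """Per-sample layer normalization over (C, H, W).

    Illustrative sketch: each image in the batch is normalized
    independently, so batch sizes greater than 1 work correctly.
    """
    def __init__(self, num_features, eps=1e-5, affine=True):
        super().__init__()
        self.eps = eps
        self.affine = affine
        if affine:
            # Per-channel scale and shift, as in a typical layernorm.
            self.gamma = nn.Parameter(torch.ones(num_features))
            self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        # Mean/std computed per sample across all non-batch dims,
        # then broadcast back to the input shape.
        shape = [-1] + [1] * (x.dim() - 1)
        mean = x.view(x.size(0), -1).mean(dim=1).view(*shape)
        std = x.view(x.size(0), -1).std(dim=1).view(*shape)
        x = (x - mean) / (std + self.eps)
        if self.affine:
            shape = [1, -1] + [1] * (x.dim() - 2)
            x = x * self.gamma.view(*shape) + self.beta.view(*shape)
        return x
```

With this formulation a batch of size 4 and a batch of size 1 produce the same per-image statistics, which is the property the single-image version lacked.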
@mingyuliutw Thanks! I will try munit_pytorch0.3 and hope it speeds up training. Also, I hope the code will support multiple GPUs in the future.
@HsinYingLee and @milankolkata I think the code is now working properly in pytorch 0.4. We spent some time in the past few days playing with this …
Hi all,
After updating to pytorch 0.4.0, I tried to train MUNIT on the summer2winter_yosemite dataset from CycleGAN (https://github.com/junyanz/CycleGAN), but the results are of poor quality. I tried the initially released version of MUNIT before and it worked perfectly well. Is anyone facing the same issue after switching to pytorch 0.4.0?
Here is the snapshot after 150000 iterations:

I use the default configuration.
Thank you.