Question about training #23
From my understanding, the domain-invariant perceptual loss should only be used to accelerate training for inputs >= 512×512. My guess is that 256×256 images are too small to benefit. Page 8 of the paper gives more details. Could the authors confirm my guess? Thanks.
@OValery16 Yes, we find that the domain-invariant perceptual loss is useful for large images. For an image resolution of 256×256, we do not use the domain-invariant perceptual loss.
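In practice, disabling the loss for 256×256 training comes down to zeroing its weight in the training config. A minimal sketch, assuming the `vgg_w` field mentioned later in this thread is the weight on the perceptual-loss term (the value is illustrative, not from the authors):

```yaml
# Illustrative fragment of a training config for 256x256 images.
# vgg_w weights the domain-invariant perceptual loss; setting it to 0
# disables that loss, as the authors recommend at this resolution.
vgg_w: 0   # domain-invariant perceptual loss off for 256x256
```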
@wanghxcis Could you send me some example images and training results?
@mingyuliutw I read your paper and I'm a bit confused. In which case do you use the explicit style-augmented cycle consistency loss?
@mingyuliutw Thanks, I will get rid of the perceptual loss and try again. My training parameters are as follows:

```yaml
max_iter: 1000000   # maximum number of training iterations
input_dim_a: 3      # number of image channels [1/3]
```
Hi, does the domain-invariant perceptual loss affect image quality at size 256×256?
Hi, did you solve this problem? I have the same confusion.
Hi, I ran into the same problem. How did you solve it?
Hi, I also resized the images to 256×256, but in that case, if vgg_w is non-zero, the loss becomes NaN. Do you have any suggestions? Thanks!
Hi, I'm trying to reproduce MUNIT, but at `for (src, dst) in zip(vgglua.parameters()[0], vgg.parameters()):` I get `TypeError: 'NoneType' object is not callable`. I tried modifying the code, but the generated vgg16.weight does not work and the vgg_w loss term becomes NaN. Have you encountered this? Or could you send me the vgg16.weight from your models folder? Many thanks!
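For anyone stuck on the NaN reports above, it may help to see the structure of the loss in isolation. Below is a minimal sketch of a domain-invariant perceptual loss: feature maps are instance-normalized before being compared, which removes per-image feature statistics and also keeps magnitudes bounded. `TinyFeatures` is a hypothetical stand-in for the VGG16 feature extractor (the real code loads converted Lua-Torch VGG weights); any convolutional feature map illustrates the computation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFeatures(nn.Module):
    """Hypothetical stand-in for a VGG16 feature extractor."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)

def domain_invariant_perceptual_loss(feat_net, img, target):
    # Instance-normalize feature maps before comparing them, so the loss is
    # invariant to per-image feature statistics (the "domain-invariant" part).
    # The normalization also bounds feature magnitudes, which helps diagnose
    # whether a NaN comes from the loss itself or from broken VGG weights.
    f_img = F.instance_norm(feat_net(img))
    f_tgt = F.instance_norm(feat_net(target))
    return F.mse_loss(f_img, f_tgt)

net = TinyFeatures().eval()
x = torch.randn(2, 3, 64, 64)
y = torch.randn(2, 3, 64, 64)
loss = domain_invariant_perceptual_loss(net, x, y)
print(torch.isfinite(loss).item())
```

If this sketch stays finite but your full pipeline produces NaN, the problem is likely in the loaded vgg16.weight file rather than in the loss formula.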
Hi guys,
I collected some winter and summer images and resized them to 256×256. When I train on them, I find the vgg_w parameter hard to tune. If the value is large, the output image quality is OK, but I see only a small translation effect; the output is almost identical to the input. If the value is small, the output is blurry even after 1,000,000 iterations. What should I do: enlarge mlp_dim, or something else?
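A rough way to see why tuning `vgg_w` is delicate: the generator objective is a weighted sum of terms, so a large perceptual weight lets that term dominate, and the cheapest way to reduce it is to reproduce the input (little translation effect). A toy calculation with made-up loss magnitudes (all numbers are hypothetical, not from the paper or the released configs):

```python
# Hypothetical per-term loss magnitudes early in training (illustrative only).
loss_gan, loss_recon, loss_vgg = 1.0, 0.3, 4.0

def total_loss(gan_w, recon_x_w, vgg_w):
    # Weighted sum mirroring the shape of the generator objective.
    return gan_w * loss_gan + recon_x_w * loss_recon + vgg_w * loss_vgg

# Large vgg_w: the perceptual term dwarfs the adversarial term,
# so the generator is rewarded mostly for copying the input.
dominated = total_loss(1.0, 10.0, 10.0)   # 1 + 3 + 40 = 44.0

# vgg_w = 0 (the authors' suggestion for 256x256): adversarial and
# reconstruction terms drive training instead.
balanced = total_loss(1.0, 10.0, 0.0)     # 1 + 3 + 0 = 4.0

print(dominated, balanced)
```

The arithmetic is trivial, but it shows why, at 256×256, dropping the perceptual term (as the authors suggest above) is usually a better first step than enlarging mlp_dim.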