Doubt with AdaIN? #48
The difference between AdaIN and IN is that the affine parameters in AdaIN are data-adaptive (different test data have different affine parameters), while the affine parameters in IN are fixed (fixed after fitting the training data). In the case of MUNIT, the affine parameters of AdaIN come from the style code via the decoding operation of the MLP.
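To make the distinction concrete, here is a minimal NumPy sketch of that idea: a small MLP maps a style code to per-channel scale and shift parameters, which are then applied to instance-normalized content features. This is an illustration only, not the MUNIT code; the helper names (`mlp`, `adain`) and all layer sizes are made up for the example.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W); normalize each channel of each sample independently
    mean = x.mean(axis=(2, 3), keepdims=True)
    std = x.std(axis=(2, 3), keepdims=True)
    return (x - mean) / (std + eps)

def mlp(style_code, W1, b1, W2, b2):
    # tiny 2-layer MLP mapping a style code to 2*C affine parameters
    h = np.maximum(0.0, style_code @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def adain(x, style_code, params):
    # AdaIN: affine parameters are *predicted* from the style code,
    # unlike IN, whose affine parameters are fixed learned weights
    C = x.shape[1]
    out = mlp(style_code, *params)             # (N, 2C)
    gamma, beta = out[:, :C], out[:, C:]
    xn = instance_norm(x)
    return gamma[:, :, None, None] * xn + beta[:, :, None, None]

# usage: two images, 4 channels, 8x8 features, 3-dim style code
rng = np.random.default_rng(0)
N, C, H, W, S, Hd = 2, 4, 8, 8, 3, 16
x = rng.standard_normal((N, C, H, W))
s = rng.standard_normal((N, S))
params = (rng.standard_normal((S, Hd)), np.zeros(Hd),
          rng.standard_normal((Hd, 2 * C)), np.zeros(2 * C))
y = adain(x, s, params)
```

A different style code `s` yields different `gamma`/`beta` for the same content features `x`, which is exactly the "data-adaptive" property being discussed; the MLP weights in `params` are still trained jointly with the rest of the network.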
Does the MLP need to be trained?
Thank you very much. I am also puzzled about the learned affine parameters. Is it possible to use the AdaINGen proposed in MUNIT to replace the network in "Arbitrary Style Transfer in Real-Time"? That is, the content image is fed into the content encoder, the style image is fed into the style encoder to learn the mean and variance, and then the decoder synthesizes the stylized image with the content of the content image and the style of the style image. Does it still achieve arbitrary style transfer?
Thanks for your beautiful work.
I have a question after reading the paper.
AdaIN is used here, but when it was proposed in Huang's paper its affine parameters were fixed (computed from the input y). Here they are instead produced by an MLP module.
However, the affine parameters of Instance Norm are also learned from data during training, so what is the difference between IN and AdaIN in your paper? Would it be OK to replace AdaIN with IN and not use the MLP?