During Example-guided Image Translation extracting style from domain A works, but from domain B breaks #59
Comments
I tried generating all combinations of "style from domain X, content from Y, decode as Z" and some of them worked well, while others failed miserably; fortunately, the one I needed was among those combinations that worked.
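For anyone who wants to reproduce that sweep, here is a minimal sketch of how such a grid could be generated with a trained model. It assumes MUNIT-style auto-encoders `gen_a`/`gen_b` whose `encode()` returns `(content, style)` and whose `decode(content, style)` returns an image; the helper name `all_combinations` is made up for illustration.

```python
import itertools
import torch

@torch.no_grad()
def all_combinations(gen_a, gen_b, x_a, x_b):
    """Decode every (content source, style source, decoder) combination.

    Assumes MUNIT-style auto-encoders: encode() -> (content_code, style_code),
    decode(content_code, style_code) -> image.
    """
    gens = {"a": gen_a, "b": gen_b}
    images = {"a": x_a, "b": x_b}
    # Pre-compute (content, style) codes for both domains.
    codes = {k: gens[k].encode(images[k]) for k in gens}
    outputs = {}
    for content_from, style_from, decode_as in itertools.product("ab", repeat=3):
        content = codes[content_from][0]
        style = codes[style_from][1]
        outputs[(content_from, style_from, decode_as)] = gens[decode_as].decode(content, style)
    return outputs
```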
I have an answer to that guidance issue: to perform guided translation you have to train the network accordingly, which means changing gen_update to perform the a-to-b translation using a guided style (see the sketch below). The same change is needed in the dis_update function.
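A rough sketch of that change, assuming the MUNIT-style `gen_a`/`gen_b` auto-encoder interface (this is not the repository's `gen_update` verbatim): instead of decoding the cross-domain translation only with styles drawn from N(0, I), also decode it with the style codes extracted from real target-domain examples.

```python
import torch

def guided_cross_translation(gen_a, gen_b, x_a, x_b):
    """Cross-domain translation guided by styles extracted from real examples.

    gen_a / gen_b are assumed to be MUNIT-style auto-encoders where
    encode() returns (content_code, style_code) and decode() takes them back.
    """
    c_a, s_a_prime = gen_a.encode(x_a)
    c_b, s_b_prime = gen_b.encode(x_b)
    # The default gen_update decodes the cross-domain images with random styles,
    # e.g. s_b = torch.randn(x_b.size(0), style_dim, 1, 1).
    # For guided training, decode with the styles of the real example images:
    x_ab = gen_b.decode(c_a, s_b_prime)  # a -> b, guided by x_b's style
    x_ba = gen_a.decode(c_b, s_a_prime)  # b -> a, guided by x_a's style
    return x_ab, x_ba
```

These guided outputs would then go through the same adversarial and reconstruction terms inside gen_update, and dis_update would need to score them as well so the discriminators see guided translations during training.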
I encounter the same problem when I use my own dataset for training. Do you have any solution for it now?
Hi!

For some reason, after 500k iterations translations between domains with random styles look very good (`test.py <...> --a2b 1` outputs), whereas an attempt to add the `--style <image_from_the_domain_b>` flag yields a complete mess that looks like translations from very early epochs: it preserves the general contours/layout of an image, but is extremely blurry and does not look anything like domain B. However, if I add the `--style <image_from_the_domain_A>` flag, it yields reasonable images. The `image_from_the_domain_b` was in the training set for domain A. Any suggestions about why that might be happening?

I am using the same set of network hyperparameters as in the `edges2handbags` config on 128x128 images. Translations with random styles, as I mentioned, look good.

Ben
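For reference on where the domain of the `--style` image matters: the guided path at test time roughly looks like the sketch below (an approximation under the MUNIT-style auto-encoder assumption, not the repository's `test.py` verbatim). With `--a2b 1`, the style code is extracted by the domain-B encoder, so the example image is expected to come from domain B.

```python
import torch

@torch.no_grad()
def guided_translate(gen_a, gen_b, content_image, style_image, a2b=True):
    """Approximate guided test-time path for MUNIT-style auto-encoders."""
    content_encoder = gen_a if a2b else gen_b  # encodes the input image
    style_encoder = gen_b if a2b else gen_a    # encodes the --style example
    decoder = gen_b if a2b else gen_a          # decodes into the target domain
    content, _ = content_encoder.encode(content_image)
    _, style = style_encoder.encode(style_image)
    return decoder.decode(content, style)
```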