- UNSUPERVISED CROSS-DOMAIN IMAGE GENERATION, ICLR 2017
- Install PyTorch and torchvision
-
First, train the function f on the source domain (SVHN)
-
CUDA_VISIBLE_DEVICES=x python main_recog.py --dataset svhn --dataroot /path/to/svhn/extra/ --valDataroot /path/to/svhn/test/ --exp recog_svhn
-
Datasets will be downloaded automatically to the paths you specify
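For reference, torchvision handles the download itself when download=True; a minimal sketch, not this repo's actual loading code (paths and transforms are placeholders):

# Minimal sketch: torchvision downloads SVHN/MNIST on first use when download=True.
import torchvision.datasets as dsets
import torchvision.transforms as transforms

svhn_extra = dsets.SVHN(root='/path/to/svhn/extra/', split='extra',
                        transform=transforms.ToTensor(), download=True)
mnist_train = dsets.MNIST(root='/path/to/mnist/train/', train=True,
                          transform=transforms.ToTensor(), download=True)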
-
The resulting model is saved in the recog_svhn directory under a name like netE_epoch_xx.pth
-
You should reach an accuracy of roughly 95%.
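Conceptually, this step just trains a digit classifier f (netE) with a standard cross-entropy loss. A rough sketch under assumed architecture, hyperparameters, and names; the actual ones live in main_recog.py:

# Illustrative sketch of the source-domain classifier training, NOT the repo's code.
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as dsets
import torchvision.transforms as transforms

netE = nn.Sequential(                                     # hypothetical stand-in for f
    nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),     # 32x32 -> 16x16
    nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),   # 16x16 -> 8x8
    nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(inplace=True),  # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 10))                           # 10 digit classes

train_loader = torch.utils.data.DataLoader(
    dsets.SVHN('/path/to/svhn/extra/', split='extra',
               transform=transforms.ToTensor(), download=True),
    batch_size=128, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(netE.parameters(), lr=2e-4)
os.makedirs('recog_svhn', exist_ok=True)

for epoch in range(25):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(netE(images), labels)
        loss.backward()
        optimizer.step()
    # checkpoint naming as described above, e.g. recog_svhn/netE_epoch_24.pth
    torch.save(netE.state_dict(), 'recog_svhn/netE_epoch_%d.pth' % epoch)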
-
Then, train the DTN (Domain Transfer Network)
-
CUDA_VISIBLE_DEVICES=x python main_dtnetgan.py --datasetA svhn --datarootA /path/to/svhn/extra/ --valDatarootA /path/to/svhn/test/ --datasetB mnist --datarootB /path/to/mnist/train/ --valDatarootB /path/to/mnist/test/ --netE /path/to/previously/trained/model/netE_epoch_xx.pth --exp S2M --crossentropy
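Here --netE points at the classifier trained above (it is kept fixed during DTN training), and --crossentropy selects a cross-entropy form of the f-constancy loss L_CONST (Eq. 5 in the paper; see the notes below). One plausible reading of that loss, with netE and netG as illustrative names for f and the generator:

# Hedged sketch of a cross-entropy variant of L_CONST; the exact form used by
# main_dtnetgan.py may differ.
import torch
import torch.nn.functional as F

def l_const_crossentropy(netE, netG, x_source):
    # netE: the pretrained classifier f, loaded from --netE and kept frozen
    # netG: the DTN generator being trained
    with torch.no_grad():
        probs_src = F.softmax(netE(x_source), dim=1)             # soft targets f(x)
    log_probs_gen = F.log_softmax(netE(netG(x_source)), dim=1)   # f(G(x))
    # cross-entropy between f(x) and f(G(x)), averaged over the batch
    return -(probs_src * log_probs_gen).sum(dim=1).mean()

An equally plausible variant feeds the hard labels argmax f(x) and the logits of f(G(x)) to nn.CrossEntropyLoss; in both cases only the generator receives gradient updates, since f stays fixed.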
- Style transfer as a special case of domain transfer (not yet implemented)
- We used a cross-entropy loss when computing L_CONST (i.e. Eq. 5 in the paper), as sketched after the DTN training command above
- This idea was borrowed from Plug-and-Play Generative Networks (PPGN)
- You can easily reverse the direction of domain transfer, e.g. MNIST to SVHN:
CUDA_VISIBLE_DEVICES=x python main_recog.py --dataset mnist --dataroot /path/to/mnist/train/ --valDataroot /path/to/mnist/test/ --exp recog_mnist
CUDA_VISIBLE_DEVICES=x python main_dtnetgan.py --datasetA mnist --datarootA /path/to/mnist/train/ --valDatarootA /path/to/mnist/test/ --datasetB svhn --datarootB /path/to/svhn/extra/ --valDatarootB /path/to/svhn/test/ --netE /path/to/pretrained/model/netE_epoch_xx.pth --exp M2S
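Either direction optimizes the same objective. For reference, a very rough sketch of the generator-side loss from the paper, combining an adversarial term with L_CONST on source images and an identity term L_TID on target images; the adversarial form, the weights, and all names below are placeholder assumptions, not the values used by main_dtnetgan.py:

# Rough, assumed sketch of the DTN generator objective (the paper also adds a
# total-variation smoothness term on G(x), omitted here).
import torch
import torch.nn.functional as F

def generator_loss(netE, netG, netD, x_source, x_target, alpha=1.0, beta=1.0):
    fake_from_source = netG(x_source)   # G(x) for source images (e.g. SVHN)
    fake_from_target = netG(x_target)   # G(x) for target images (e.g. MNIST)

    # adversarial term: make both kinds of generated images look like real
    # target images to the discriminator (binary form used here for brevity)
    d_src = netD(fake_from_source)
    d_tgt = netD(fake_from_target)
    adv = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) + \
          F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))

    # L_CONST: f should produce the same prediction for x and G(x)
    # (e.g. the cross-entropy variant sketched earlier, or an MSE on f's features)
    l_const = l_const_crossentropy(netE, netG, x_source)

    # L_TID: target images should be (near) fixed points of G
    l_tid = F.mse_loss(fake_from_target, x_target)

    return adv + alpha * l_const + beta * l_tid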