This repository provides code for the paper Universal Domain Adaptation through Self-Supervision. For a quick overview of the paper, see our project page, or read the paper itself.
Requirements: Python 3.6.9, PyTorch 1.2.0, torchvision 0.4, Apex. See requirement.txt. We use the NVIDIA Apex library for memory-efficient, high-speed training.
Datasets: Office, OfficeHome, VisDA.
Prepare the datasets in the data directory as follows.
./data/amazon/images/ ## Office
./data/Real/ ## OfficeHome
./data/visda_train/ ## VisDA synthetic images
./data/visda_val/ ## VisDA real images
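The expected layout can be scaffolded before downloading; a minimal sketch (only the directory names come from this README — the datasets themselves must be downloaded and extracted into these locations separately):

```shell
# Create the directory layout expected by the training scripts.
# The actual images (Office, OfficeHome, VisDA) are downloaded separately.
mkdir -p ./data/amazon/images   # Office
mkdir -p ./data/Real            # OfficeHome
mkdir -p ./data/visda_train     # VisDA synthetic images
mkdir -p ./data/visda_val       # VisDA real images
ls -d ./data/*/                 # verify the layout
```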
Prepare the image lists.
unzip txt.zip
The file lists must be stored in ./txt.
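A guarded version of the extraction step that is safe to re-run (assumes txt.zip sits in the repository root, as above):

```shell
# Extract the image lists into ./txt; -o overwrites files on re-runs.
if [ -f txt.zip ]; then
  unzip -o txt.zip
fi
# Sanity check: the training scripts expect the lists under ./txt.
if [ -d txt ]; then
  echo "file lists ready"
else
  echo "missing ./txt - place txt.zip in the repo root and re-run"
fi
```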
All training scripts are stored in the script directory.
Example: open-set domain adaptation on Office.
sh script/run_office_obda.sh $gpu-id configs/office-train-config_ODA.yaml
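Here $gpu-id is the CUDA device index. A dry-run sketch that prints the full command before launching (the loop and the GPU value are illustrative; only the ODA config name comes from this README — check ./configs for the others):

```shell
# Dry run: print the training command for each config without executing it.
GPU=0   # CUDA device index; substitute your own.
for cfg in configs/office-train-config_ODA.yaml; do
  echo sh script/run_office_obda.sh "$GPU" "$cfg"
done
```

Dropping the echo runs the command for real.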
This repository is contributed by Kuniaki Saito. If you use this code or its derivatives, please consider citing:
@inproceedings{saito2020dance,
title={Universal Domain Adaptation through Self-Supervision},
author={Saito, Kuniaki and Kim, Donghyun and Sclaroff, Stan and Saenko, Kate},
booktitle={NeurIPS},
year={2020}
}