
paper vs code #3

Open
ttommasi opened this issue Oct 15, 2019 · 0 comments
Hello, I have a few questions about the code and about the CVPR19 paper where the method was presented:

  1. In eq. (2) you indicate that the similarity score for a target sample is obtained by taking the maximum over the |C_s| outputs of G_c. In the code (file step_1.py, line 122), however, the similarity score is computed by summing all |C_s| outputs of G_c. Which of the two choices is the correct one? Does it depend on the dataset?

  2. We are trying to reproduce your results on Office-Home (Table 3). We ran the provided code with the same parameters you indicate for the Office A-W case, but we suspect the parameters differ for this dataset, so we kindly ask whether you can share the correct parameters for Office-Home, and whether they differ for each domain pair.

  3. Moreover, did you also save the OS* results (in addition to ALL and UNK) for Table 3? They are missing from the paper and we could not find any supplementary material. It is difficult to make a fair comparison without all the metrics.
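For reference, here is a minimal sketch of the two alternatives discussed in question 1, assuming G_c produces one similarity value per known source class (the array values and variable names below are illustrative, not taken from the repository):

```python
import numpy as np

# Hypothetical per-class outputs of G_c for a batch of target samples:
# shape (batch_size, |C_s|), one similarity value per known source class.
gc_outputs = np.array([
    [0.1, 0.7, 0.2],   # sample 1
    [0.3, 0.5, 0.4],   # sample 2
])

# Variant described in eq. (2) of the paper: take the maximum output.
score_max = gc_outputs.max(axis=1)   # [0.7, 0.5]

# Variant implemented in step_1.py (line 122): sum all |C_s| outputs.
score_sum = gc_outputs.sum(axis=1)   # [1.0, 1.2]

print(score_max, score_sum)
```

Note that the two variants can rank target samples differently (here sample 1 wins under max, sample 2 under sum), so the choice is not merely cosmetic.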
