Hello, I have a few questions about the code and about the CVPR19 paper where the method was presented.
In Eq. (2) you indicate that the similarity score for a target sample is obtained by taking the maximum over the |C_s| outputs of G_c. In the code (file step_1.py, line 122), however, you compute the similarity score by summing the values of all |C_s| outputs of G_c. Which of the two choices is the correct one? Does it depend on the dataset?
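To make the discrepancy concrete, here is a minimal sketch of the two variants being compared. The function names, array shapes, and example values are hypothetical illustrations, not taken from the repository:

```python
import numpy as np

def similarity_max(gc_outputs):
    # Eq. (2) in the paper: similarity = max over the |C_s| outputs of G_c
    return np.max(gc_outputs, axis=1)

def similarity_sum(gc_outputs):
    # step_1.py, line 122: similarity = sum over the |C_s| outputs of G_c
    return np.sum(gc_outputs, axis=1)

# Hypothetical G_c outputs for 2 target samples with |C_s| = 3 source classes
# (deliberately not normalized, so the two scores differ visibly)
scores = np.array([[0.2, 0.5, 0.1],
                   [0.3, 0.2, 0.4]])

print(similarity_max(scores))  # per-sample maximum
print(similarity_sum(scores))  # per-sample sum
```

Note that if the |C_s| outputs were softmax-normalized, the sum would be constant (1.0) for every sample, so the two formulations would behave very differently as a ranking criterion; that is part of why we are asking which one is intended.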
We are also trying to reproduce the results you report on Office-Home (Table 3). We used the provided code with the same parameters you indicate for the A-W Office case, but we suspect the parameters may differ for this dataset. Could you kindly tell us the correct parameters for Office-Home, and whether they were different for each domain pair?
Moreover, we wonder whether you also saved the OS* (as well as ALL and UNK) results for Table 3: they are missing from the paper, and we did not find any supplementary material. It is difficult to make a fair comparison without seeing all the metrics.