Cosine Distance #32
Hi, thanks for your encouraging words. Sorry, I am not completely sure which part is confusing to you. The cosine distance between two normalized vectors v1 and v2 is defined as 1 - dot_product(v1, v2). f is the trained encoder, and will thus be different for each domain. From an image, you get the embedding by taking the output of f, flattening it into a 1-dimensional vector, and normalizing it so that its length is 1. You do the same for the other input image, and then compute the cosine distance using the equation above. Hopefully that answers your question.
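The steps above (flatten the encoder output, L2-normalize it, then take 1 minus the dot product) can be sketched as follows; this is a minimal illustration, not the repo's actual code, and the embeddings here are stand-ins for whatever `f` produces:

```python
import numpy as np

def cosine_distance(emb1, emb2):
    """Cosine distance between two embeddings:
    flatten each to 1-D, normalize to unit length, return 1 - dot product."""
    v1 = emb1.flatten()
    v2 = emb2.flatten()
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    return 1.0 - np.dot(v1, v2)

# Dummy 2x2 "feature maps" standing in for encoder outputs f(x):
a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 1.0], [0.0, 0.0]])

print(cosine_distance(a, a))  # 0.0 (identical directions)
print(cosine_distance(a, b))  # 1.0 (orthogonal directions)
```

The distance ranges from 0 (same direction) to 2 (opposite direction), with 1 meaning the embeddings are orthogonal.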
Thanks for the reply. I am just confused about one thing: did you calculate the cosine distance between the real images from the two domains, or between the generated images?
Between the real images.
Thank you so much. |
Hi, thank you for such a nice piece of work.
In Section 4.4 of your paper, it is mentioned that you calculated the cosine distance between the flattened embeddings. Suppose we have four embeddings, f(x1), f(x1'), f(x2), and f(x2'), where x1 and x1' are the real and generated images in domain X1, and x2 and x2' are the real and generated images in domain X2. Could you please clarify whether you calculated the cosine distance between f(x1) and f(x1'), between f(x1) and f(x2'), or between f(x1) and f(x2)?
Thanks in advance.