Hi,
I am currently studying the VideoGPT and have some doubts on VQ-VAE losses.
Where is VQ loss?
In the original paper, there are three losses: a reconstruction loss, a VQ (codebook) loss (which pulls the embedding toward encoder_output.detach()), and a commitment loss (which pulls encoder_output toward embedding.detach()).
I could only find the commitment loss implemented, but not the VQ loss.
Any information to resolve this would be highly appreciated.
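For reference, here is a minimal numerical sketch of the three loss terms from the original VQ-VAE paper. The values and shapes are made up for illustration (in VideoGPT these would be PyTorch tensors); the point is that the VQ loss and commitment loss have the same numerical form and differ only in which side the gradient flows to via the stop-gradient (`detach`).

```python
import numpy as np

# Toy encoder outputs and their nearest codebook vectors
# (hypothetical values, purely for illustration).
z_e = np.array([[0.9, 0.1], [0.2, 0.8]])   # encoder output z_e(x)
e   = np.array([[1.0, 0.0], [0.0, 1.0]])   # selected codebook embeddings

# VQ (codebook) loss: ||sg[z_e] - e||^2
# In a real implementation z_e would be detached, so only e gets gradients.
vq_loss = np.mean((z_e - e) ** 2)

# Commitment loss: beta * ||z_e - sg[e]||^2
# Here e would be detached, so only the encoder gets gradients.
beta = 0.25
commit_loss = beta * np.mean((z_e - e) ** 2)

print(vq_loss, commit_loss)
```

Numerically the two terms are identical up to the factor beta; the distinction only matters for backpropagation, which is why an implementation that updates the codebook without gradients does not need the VQ term at all.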
On Oct 9, 2023, yiqiwang8177 changed the title from "VQ-VAE commitment loss problem and VQ-loss is missing?" to "VQ-VAE VQ-loss is missing? only find reconstruct, and commitment loss".
That is because the codebook is updated with an exponential moving average (EMA), not by the gradient of the codebook loss (see line 176 of vqvae.py).
The VQ-VAE paper's appendix shows that the EMA update is equivalent to updating the codebook by SGD on the codebook loss.