strange results #8
Hi, I suppose you're referring to the accuracy scores being 0 throughout the training process, while training/validation perplexity goes down as expected. Have you tried looking into the code to see what might be happening? I don't believe that happened for me. It would help if you could tell us the exact versions of the libraries you're using (e.g. Python, PyTorch, etc.). Also, have you looked at the outputs generated by the model at the end of the training process, for instance with
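A quick way to report the environment details asked for above is shown below (a minimal sketch; which packages are worth reporting is an assumption, not something specified in the thread):

```python
# Print the versions of libraries likely relevant to this repo.
# The exact set of packages is an assumption; adjust to your setup.
import sys

import torch
import torchtext

print("python   :", sys.version.split()[0])
print("pytorch  :", torch.__version__)
print("torchtext:", torchtext.__version__)
```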
I also encountered the same problem.
Hi @iacercalixto, I encountered the same problem and tried your method. Below is what I got.
The ppl looks good to me, but I have no idea what the average score is and whether it is reasonable or not. Do you have any insight on this result? Best,
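For reference, the two numbers are usually related: if the score is the average negative log-likelihood per target token (as described in the reply below), then the perplexity is simply its exponential. A small sketch for sanity-checking the printout (the sign convention of the printed score is an assumption; if the tool prints average log-likelihood instead, flip the sign first):

```python
import math

def ppl_from_avg_nll(avg_nll: float) -> float:
    # Perplexity is the exponential of the average negative log-likelihood per token.
    return math.exp(avg_nll)

# Hypothetical value: an average NLL of 2.3 corresponds to a perplexity of roughly 10.
print(ppl_from_avg_nll(2.3))
```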
Hi @LakeCarrot, what you are mentioning is indeed not a problem. Also, I don't think it is related to the topic created by @Eurus-Holmes (just for completeness' sake). The problem raised earlier is that the accuracy printed during training is "0.00". The perplexity and average negative log-likelihood (score) in your example both look correct. To know whether the translations make sense, check the generated output files. By default that's "pred.txt", unless you called translate_mm.py with a parameter that sets the output to another file name. Best,
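For what it's worth, a quick way to eyeball the generated translations is to print the first few lines of that output file (a sketch; "pred.txt" is the default name mentioned above, and the path is an assumption about your working directory):

```python
# Peek at the first few generated translations to judge whether they are sensible.
# Change the path if you passed a different output file to translate_mm.py.
with open("pred.txt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        if i >= 5:
            break
        print(f"{i}: {line.rstrip()}")
```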