The inference script is not generating a complete caption. #40
Comments
Thank you for reporting the issue. Have you figured it out?
Unfortunately, not yet. I have checked the config file and everything looks OK to me (beam search, max_length, etc.). I have tested the code with a single GPU (an A100, and also the Colab GPU), and I'm not sure whether that affects the inference.
Thanks for the further information. It doesn't affect the inference, as I also used one GPU. I haven't figured out the issue yet. Let me look into this in detail.
Same issue here. It has something to do with `model.eval()` not being called in the inference code, so the generated captions are not deterministic. Every time I run inference_caption.py, it gives me different captions.
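To illustrate the point above: in PyTorch, a model left in training mode keeps dropout active, so repeated forward passes on the same input differ. Calling `model.eval()` before inference fixes that. The snippet below is a minimal sketch with a stand-in module, not the repository's actual model or script:

```python
import torch
import torch.nn as nn

# Stand-in for the captioning model (hypothetical, for illustration only):
# any module containing dropout shows the same nondeterminism.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))

x = torch.ones(1, 8)

# In training mode (the default), dropout stays active, so two forward
# passes on the same input generally produce different outputs.
train_out1 = model(x)
train_out2 = model(x)

# Switching to eval mode disables dropout (and puts layers like batch
# norm into running-statistics mode), making inference deterministic.
model.eval()
with torch.no_grad():  # also skip gradient tracking during inference
    eval_out1 = model(x)
    eval_out2 = model(x)

assert torch.equal(eval_out1, eval_out2)  # deterministic in eval mode
```

If the inference script loads the checkpoint but never calls `model.eval()`, this would explain both the run-to-run variation and the truncated captions reported in this issue.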
Hello, output: (image attachment)
Hi, thank you for sharing this great work.
I’m trying to reproduce the paper's results on the 5k-image Karpathy test split using the inference script, but I’m getting lower scores:
Bleu_1: 0.810
Bleu_2: 0.655
Bleu_3: 0.510
Bleu_4: 0.388
METEOR: 0.295
ROUGE_L: 0.587
CIDEr: 1.333
SPICE: 0.230
After some digging, I found that the captions are not fully generated.
I managed to reproduce the problem in Colab as well:
https://colab.research.google.com/drive/1BvtscubSujlxOFhOchVGNB79KkKYoMiH?usp=sharing