Are the refine_loss_1 and refine_loss_2 defined according to Accumulated Recurrent Learning? #11
Comments
I have had another question for a long time: where is the clique partition in your code? I notice that this function looks like the clique-partition process, but it only contributes to the object localization loss, while the global loss, i.e. the object discovery loss, is defined like that. Thank you for your detailed answer last time. The questions are a bit long this time, and perhaps others have the same ones, so I hope we can get your help. Thank you very much!
Hi, is something wrong with this loss? It seems to differ from the entropy loss in the paper.
I read Fang Wan's paper and your code. In your code:
loss = cls_det_loss / 20 + refine_loss_1*0.1 + refine_loss_2*0.1
I think cls_det_loss / 20 is the global entropy model, the two refine losses are local entropy models, and 0.1 is the regularization weight. refine_loss_1 and refine_loss_2 come from different object localization branches, which corresponds to the "Accumulated Recurrent Learning" in the paper. Is that right?
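For reference, the weighted sum from the line above can be sketched as a small helper. This is only an illustration of how the constants combine, not the repository's actual code; the function and parameter names (`combined_loss`, `global_scale`, `refine_weight`) are hypothetical, and the factors 1/20 and 0.1 are taken from the hard-coded line quoted above.

```python
def combined_loss(cls_det_loss, refine_loss_1, refine_loss_2,
                  global_scale=1.0 / 20, refine_weight=0.1):
    """Hedged sketch of the weighted loss combination discussed above.

    cls_det_loss   -- global entropy term (object discovery loss)
    refine_loss_*  -- local entropy terms from the two refinement branches
    global_scale / refine_weight are hypothetical names for the constants
    1/20 and 0.1 hard-coded in the original line.
    """
    return (cls_det_loss * global_scale
            + refine_loss_1 * refine_weight
            + refine_loss_2 * refine_weight)
```

With plain scalars this reproduces the original expression, e.g. `combined_loss(20.0, 1.0, 1.0)` gives `1.2` (that is, `20/20 + 0.1 + 0.1`); in training the arguments would be scalar loss tensors instead.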
By the way, I also want to know whether bbox_pred is used in train mode. I saw your explanation of "bbox_pred = bbox_pred[:,:80]" in #9, but I'm still a little confused: when I print bbox_pred during training, the values are all 0. So is bbox_pred only used in test mode?
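The check described above (print bbox_pred during training and see whether it is all zeros) can be sketched without any framework dependency. This is a hypothetical diagnostic, not code from the repository; `bbox_pred_is_inactive` is an invented name, and bbox_pred is assumed to be a nested list shaped [N, >= 80], with the slice mirroring `bbox_pred[:, :80]` from the discussion.

```python
def bbox_pred_is_inactive(bbox_pred, num_classes=80):
    """Return True if the first num_classes columns of every row are zero.

    A True result during training would match the observation above that
    the regression head stays at its zero initialization, i.e. bbox_pred
    effectively contributes nothing in train mode.
    """
    truncated = [row[:num_classes] for row in bbox_pred]
    return all(value == 0 for row in truncated for value in row)
```

If this returns True on every training batch, it would support the guess that the regression output only matters at test time; a definitive answer still requires checking whether bbox_pred enters the training loss in the actual code.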
Looking forward to your reply, thank you.