Question: Normal Training Information #26
Hi, I don't have the results on hand right now. In the paper we report very
good results around epoch 20, so I suspect the results are decent around
epoch 10.
In my experience, matting models take a long time to converge for fine structures like hair and transparency.
On Thu, Aug 20, 2020 at 8:31 AM zoezhou1999 wrote:
Hi, recently I have been reproducing your project and debugging my reimplementation. Could I ask at which epoch of training the evaluation metrics on the Composition-1k dataset reach a reasonably good level, so I can tell whether the current training is running normally or needs further adjustment? Thank you so much!
Hi, thank you for your reply. I have another question about my loss setup: the L1 and Laplacian losses on alpha are computed over the whole image; the L1 and Laplacian losses on FG and BG are computed only where alpha > 0 and alpha < 1, respectively (there is no FG reconstruction); the composition losses for alpha and for FB are both computed over the whole image; and the exclusion loss is adapted from the original TensorFlow implementation of the paper that proposed it. Do you think this setup is okay? Thank you so much!
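For reference, here is a minimal sketch of that kind of masked loss layout, assuming PyTorch and (N, C, H, W) tensors in [0, 1]. All function and variable names are made up for illustration, this is not the repo's actual training code, and the Laplacian and exclusion terms are omitted for brevity:

```python
import torch
import torch.nn.functional as F

def masked_l1(pred, target, mask=None):
    # Plain L1, optionally averaged only over the pixels selected by mask.
    if mask is None:
        return F.l1_loss(pred, target)
    return (mask * (pred - target).abs()).sum() / (mask.sum() + 1e-8)

def matting_losses(alpha_p, fg_p, bg_p, alpha_gt, fg_gt, bg_gt, image):
    losses = {}
    # Alpha L1 over the whole image.
    losses["alpha_l1"] = masked_l1(alpha_p, alpha_gt)
    # FG is only defined where alpha > 0, BG only where alpha < 1.
    fg_mask = (alpha_gt > 0).float()
    bg_mask = (alpha_gt < 1).float()
    losses["fg_l1"] = masked_l1(fg_p, fg_gt, fg_mask)
    losses["bg_l1"] = masked_l1(bg_p, bg_gt, bg_mask)
    # Composition terms: recompose the input image from the predictions,
    # once using the predicted alpha and once using the predicted FG/BG.
    comp_from_alpha = alpha_p * fg_gt + (1 - alpha_p) * bg_gt
    comp_from_fb = alpha_gt * fg_p + (1 - alpha_gt) * bg_p
    losses["comp_alpha"] = masked_l1(comp_from_alpha, image)
    losses["comp_fb"] = masked_l1(comp_from_fb, image)
    return losses
```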
Hmmm, while training I noticed that the alpha loss drops to a fairly small value within epoch 1 or 2, then seems to bounce back and forth, and any further decrease is too small to give a real improvement. Meanwhile the FB loss, which is larger than the alpha loss (though not huge), keeps dropping faster. It feels like this makes learning the alpha matte harder and prevents the model from converging further. Is this normal or not? Thank you so much!
Hey @zoezhou1999, in my personal experience a plain L1 loss on alpha should be good enough to get decent initial results. Also, as the paper shows, the FB loss doesn't improve the metrics much, so I think you can use all four losses on alpha only.
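If it helps, here is a hedged sketch of one of those alpha-only terms, a Laplacian-pyramid L1 loss, assuming PyTorch and alpha tensors of shape (N, 1, H, W). The kernel, level count, and per-level weights are illustrative guesses, not the exact settings used in this repo:

```python
import torch
import torch.nn.functional as F

def gauss_kernel(channels=1, device="cpu"):
    # 5x5 binomial (approximately Gaussian) kernel, one copy per channel.
    k = torch.tensor([[1., 4., 6., 4., 1.]])
    k = (k.t() @ k) / 256.0
    return k.repeat(channels, 1, 1, 1).to(device)

def blur_downsample(x, kernel):
    # Depthwise Gaussian blur followed by stride-2 subsampling.
    x = F.conv2d(x, kernel, padding=2, groups=x.shape[1])
    return x[:, :, ::2, ::2]

def laplacian_loss(alpha_p, alpha_gt, levels=5):
    kernel = gauss_kernel(alpha_p.shape[1], alpha_p.device)
    loss = 0.0
    for i in range(levels):
        down_p = blur_downsample(alpha_p, kernel)
        down_gt = blur_downsample(alpha_gt, kernel)
        # Laplacian band = current level minus the upsampled coarser level.
        up_p = F.interpolate(down_p, size=alpha_p.shape[-2:], mode="bilinear", align_corners=False)
        up_gt = F.interpolate(down_gt, size=alpha_gt.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + (2 ** i) * F.l1_loss(alpha_p - up_p, alpha_gt - up_gt)
        alpha_p, alpha_gt = down_p, down_gt
    return loss
```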
I'm experimenting with this model and found some interesting results. The training code I wrote is based on the training code of GCA-Matting (suggested by @MarcoForte). I have a question that I hope can be answered: I used the losses as suggested, namely the L1 alpha loss, the compositional loss, and the Laplacian loss. Can someone please give me some suggestions on what I should do? Thank you so much!