question about AdaptiveThreshold, possible bug? #457
Comments
Since PrecisionRecallCurve is computed on the complete data by default in torchmetrics, you will have to hold all your datapoints in memory. For large datasets, you can look at the binned PR curve, which torchmetrics implements with a constant memory requirement.
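A minimal sketch of the binned variant, assuming a torchmetrics release that ships `BinnedPrecisionRecallCurve` (newer releases expose the same behaviour through the `thresholds` argument of the regular curve); the tensor shapes are illustrative:

```python
import torch
from torchmetrics import BinnedPrecisionRecallCurve

# Binned variant: scores are bucketed into a fixed set of thresholds, so
# memory stays constant no matter how many samples are accumulated.
pr_curve = BinnedPrecisionRecallCurve(num_classes=1, thresholds=100)

preds = torch.rand(1000)               # anomaly scores in [0, 1]
target = torch.randint(0, 2, (1000,))  # binary ground-truth labels

pr_curve.update(preds, target)
precision, recall, thresholds = pr_curve.compute()
```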
I think my post wasn't clear and I used the term "step" incorrectly.
Ahh I see. At a quick glance it seems as if the state of the wrapped PrecisionRecallCurve is never cleared. Can you try to add the snippet below to the class and see if that fixes the bug/problem? The class can be found here. The overall class should look as follows:
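The code blocks from this comment were not preserved in the thread. A plausible reconstruction, assuming `AdaptiveThreshold` wraps the curve in an attribute named `precision_recall_curve` and uses the pre-0.10 torchmetrics API (names and details here are assumptions, not the verbatim anomalib source):

```python
import torch
from torchmetrics import Metric, PrecisionRecallCurve


class AdaptiveThreshold(Metric):
    """Pick the threshold that maximizes F1 over an accumulated PR curve.

    Sketch only; the real anomalib class may differ in its details.
    """

    def __init__(self, default_value: float = 0.5, **kwargs):
        super().__init__(**kwargs)
        self.precision_recall_curve = PrecisionRecallCurve(num_classes=1)
        self.add_state("value", default=torch.tensor(default_value), persistent=True)

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        self.precision_recall_curve.update(preds, target)

    def compute(self) -> torch.Tensor:
        precision, recall, thresholds = self.precision_recall_curve.compute()
        f1_score = (2 * precision * recall) / (precision + recall + 1e-10)
        # precision/recall have one more entry than thresholds; drop the last
        # point before taking the argmax so the index stays in range.
        self.value = thresholds[torch.argmax(f1_score[:-1])]
        return self.value

    def reset(self) -> None:
        # The suggested addition: clear the wrapped metric's accumulated
        # preds/target state whenever this metric is reset, so tensors from
        # previous validation runs are released.
        super().reset()
        self.precision_recall_curve.reset()
```

`super().reset()` only clears states registered on this metric via `add_state`; the explicit child reset is needed if the installed torchmetrics version does not propagate `reset()` to child metrics automatically.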
Note: the same will be necessary for …
Hi,
Did you install anomalib with the …? If so, please provide an MWE so that the bug can be localized.
Shouldn't this one be closed? |
I believe that the original issue is not yet resolved. While the …
I'm running anomalib on a custom dataset (around 20k samples) and I've noticed constant crashes after a few epochs/validation steps (DefaultCPUAllocator: can't allocate memory...) almost independently of the model I choose.
It seems to be related to AdaptiveThreshold.
In it, PrecisionRecallCurve is used, and after every validation step new data is added to it; old data is never deleted.
Why?
Shouldn't data related to the previous validation steps be deleted?
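To make the reported behaviour concrete, a minimal sketch (assuming the pre-0.10 torchmetrics API; `preds`/`target` are the metric's internal state lists):

```python
import torch
from torchmetrics import PrecisionRecallCurve

pr_curve = PrecisionRecallCurve(num_classes=1)

for step in range(3):  # stand-in for repeated validation runs
    preds = torch.rand(4)
    target = torch.randint(0, 2, (4,))
    pr_curve.update(preds, target)
    # The internal state lists grow by one tensor per update and are never
    # trimmed, so memory scales with every sample ever seen.
    print(step, len(pr_curve.preds), len(pr_curve.target))

pr_curve.reset()            # releases the accumulated tensors
print(len(pr_curve.preds))  # 0
```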