How to run Torch Inferencer? #1122
-
Hi everyone, @ashwinvaidya17, @samet-akcay
Instead of loading the `state_dict` key, the Torch Inferencer tries to load a `model` key, which does not exist in the checkpoint. Thanks!
-
@samet-akcay sorry to disturb you, but is there any plan to fix this?
-
I am facing the same issue and can't find a workaround. I need this because I am trying to get the `pred_mask` so I can do my own processing on it. Is anyone aware of another way to get the predicted masks? Thanks
-
Hey everyone, I'll have a look at the issue
-
I have looked into this. This is actually not a bug. In order to use the Torch Inferencer, you first need to export the model to torch format via the `optimization` section of the config:

```yaml
...
optimization:
  export_mode: torch # options: torch, onnx, openvino
# PL Trainer Args. Don't add extra parameter here.
trainer:
  ...
```

So if you set `export_mode: torch`, training will additionally export a torch model file, and that file (not the Lightning `.ckpt`) is what the Torch Inferencer loads. Depending on your reply, I will convert this issue to a Q&A in Discussions.
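For reference, a minimal sketch of running the exported model (this assumes the current `TorchInferencer` API that takes a path rather than a config; the `model.pt` path below is hypothetical and depends on your `results` layout):

```python
from anomalib.deploy import TorchInferencer

# Hypothetical path to the file produced by export_mode: torch.
inferencer = TorchInferencer(path="results/patchcore/folder/run/weights/torch/model.pt")
predictions = inferencer.predict(image="path/to/test_image.png")

print(predictions.pred_score)  # image-level anomaly score
print(predictions.pred_mask)   # predicted segmentation mask (for segmentation tasks)
```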
-
hello @samet-akcay @blaz-r @alexriedel1,
I'm sorry to bother you all, but I have two questions. 1: anomalib is trained entirely with PyTorch Lightning; if I want to train with plain PyTorch, do I just go to the `torch_model.py` corresponding to each model? 2: PatchCore doesn't have a loss function; if I want to build the corresponding training procedure in plain PyTorch, how should I do it? Are there any existing tutorials, or do you have any good suggestions? Thank you very much for your reply.
-
hello @blaz-r, hello @samet-akcay, hello @ashwinvaidya17. I have very little exposure to PyTorch Lightning, which makes it difficult for me to understand the related code. The training code of anomalib uses PyTorch Lightning's `Trainer`, and even after looking at the corresponding source code I do not have a clear picture of it. For these reasons, and in order to better learn and understand anomalib, I would like to rewrite the anomalib training code in plain PyTorch. My question is: if I rewrite only the training code, how do I handle the three most basic parts, the model, the loss function, and the optimizer? Please allow me to formulate these questions more specifically:
1: Models. Take PatchCore as an example: in plain PyTorch training we normally design our own model as needed; for anomalib, can I just use the model defined in `torch_model.py`?
2: Loss functions.
2.1: I know that methods like PatchCore and PaDiM do not have a loss function. When rewriting the training code in plain PyTorch, how should I handle this kind of method? I haven't come across a similar situation in my previous studies.
2.2: For CFA, which does have a loss function, do I just need to use the `loss.py` in the corresponding folder to implement it?
3: Optimizers.
3.1: When designing the training code in plain PyTorch, the optimizer also runs into the problem of the two different kinds of models represented by PatchCore and CFA; how should I deal with that?
3.2: In the `config.yaml` file, `trainer` corresponds to many PyTorch Lightning parameters. Do I need to keep these training parameters consistent when I train with plain PyTorch? If not, will this cause a big difference in performance between the model I train and the model trained with PyTorch Lightning?
Very much looking forward to your reply, thank you again!
-
hello @samet-akcay @blaz-r
I scrutinized the source code for how the dataset is divided, and I found that the abnormal images never appear in the training set. Is that intended?
-
Hello @laogonggong847. Abnormal images are only used in the testing stage. The models are usually trained only on good data, due to the nature of anomaly detection: at training time we usually only have good samples.
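As a concrete illustration, a folder-based datamodule only ever draws training images from the normal directory (a sketch; the directory names and the exact `Folder` keyword arguments are assumptions that may differ across anomalib versions):

```python
from anomalib.data import Folder

# Illustrative layout: only images under "good" feed the training split;
# images under "defect" are used exclusively for testing/evaluation.
datamodule = Folder(
    root="./datasets/my_parts",
    normal_dir="good",
    abnormal_dir="defect",
    image_size=256,
)
datamodule.setup()
```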
-
hello @blaz-r :) I have two questions:
-
@laogonggong847 PatchCore has no optimizable parameters; it just collects features extracted by the chosen backbone model and saves them in the `memory_bank`. I guess you didn't rewrite the training script correctly, and that's what's causing these issues.
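To make that concrete, here is a rough sketch of the idea in plain PyTorch (not anomalib's actual PatchCore code; `train_loader` is a hypothetical DataLoader that yields only normal images):

```python
import torch
import torchvision.models as models

# Frozen backbone: it is never trained, only used as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep pooled features, drop the classifier
backbone.eval()

memory_bank = []
with torch.no_grad():                 # no loss, no optimizer, no gradients
    for images, _ in train_loader:    # hypothetical loader of normal images
        memory_bank.append(backbone(images))   # (B, 512) features per batch
memory_bank = torch.cat(memory_bank)  # "training" just produces this tensor

def anomaly_score(batch: torch.Tensor) -> torch.Tensor:
    """Score = distance to the nearest stored (normal) feature."""
    with torch.no_grad():
        feats = backbone(batch)                 # (B, 512)
    return torch.cdist(feats, memory_bank).min(dim=1).values
```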
-
hello @alexriedel1 I think my rewrite of the training section should be error-free for now, as the model I trained is identical except for the small portion shown in the image and the inconsistency at the beginning, which I think may be a result of my incomplete code.
-
The min and max values are calculated during validation. In anomalib you can find the min-max normalization process here:
https://github.com/openvinotoolkit/anomalib/blob/main/src/anomalib/utils/callbacks/min_max_normalization.py
and here:
https://github.com/openvinotoolkit/anomalib/blob/main/src/anomalib/post_processing/normalization/min_max.py
Also check the base model for how to get the thresholds:
https://github.com/openvinotoolkit/anomalib/blob/main/src/anomalib/models/components/base/anomaly_module.py#L158
However, you don't necessarily need the min, max, and thresholds for a working model.
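For reference, the normalization in `min_max.py` essentially maps scores so that the threshold lands at 0.5 and everything is clipped to [0, 1]:

```python
import numpy as np

def normalize(targets, threshold, min_val, max_val):
    # Center the threshold at 0.5, scale by the observed score range, clip.
    normalized = ((targets - threshold) / (max_val - min_val)) + 0.5
    return np.clip(normalized, 0, 1)
```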
-
Thank you very much @alexriedel1. 1: What exactly does the `memory_bank` saved in the checkpoint contain? 2: In the actual PatchCore PyTorch model code, I don't see any relevant results for the four thresholds; are they simply extra information that is inserted? Should I convert this ckpt model to ONNX to check whether they have the same structure? Thanks again for your patience in helping and explaining.
-
1: It is the coreset of all features extracted during training. 2: …
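To unpack "coreset": PatchCore keeps only a subset of features chosen so that it still covers the whole feature space, typically via greedy k-center selection. A rough sketch of that idea (not anomalib's actual implementation):

```python
import torch

def kcenter_greedy(features: torch.Tensor, n_select: int) -> torch.Tensor:
    """Greedily pick n_select features that cover the feature space."""
    selected = [int(torch.randint(len(features), (1,)))]   # random seed point
    # Distance of every feature to its nearest already-selected feature.
    min_dists = torch.cdist(features, features[selected]).squeeze(1)
    for _ in range(n_select - 1):
        idx = int(min_dists.argmax())          # farthest point from the coreset
        selected.append(idx)
        new_d = torch.cdist(features, features[idx:idx + 1]).squeeze(1)
        min_dists = torch.minimum(min_dists, new_d)
    return features[selected]

# e.g. keep 10% of the memory bank:
# coreset = kcenter_greedy(memory_bank, n_select=len(memory_bank) // 10)
```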
-
hello @alexriedel1,
-
Describe the bug
Hello @ashwinvaidya17 and @samet-akcay, I noticed that `./tools/inference/torch_inference.py` appears to have changed between the two versions. When configuring the parser (`parser = ArgumentParser()`), @ashwinvaidya17's version passes a config in, while @samet-akcay's version, the latest one (and the one I'm using now), does not. There is also a difference in the corresponding `load_model` function that loads the model. Because I am using the latest version of anomalib (samet-akcay), I get an error when I run PyTorch inference, and I have located the specific place where it occurs.
Its specific error message is:
`KeyError: 'model'`
So I used the following code to debug:
I found that this is indeed a mistake; there is no "model" key in the checkpoint.
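(A check along these lines makes the mismatch visible; the checkpoint path here is hypothetical:)

```python
import torch

# Inspect the top-level keys of the Lightning checkpoint.
ckpt = torch.load("results/patchcore/folder/weights/model.ckpt", map_location="cpu")
print(ckpt.keys())  # includes "state_dict", but there is no "model" key
```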
So I have the following questions:
1: Is this `KeyError: 'model'` problem a genuine bug? I see it specifically mentioned in the update files (#600, #601). How should I go about fixing it?
2: I'm actually wrapping my own program to run inference with ONNX Runtime. I noticed that OpenVINO's confidence score is calculated after getting a value via `predictions = predictions[self.output_blob]`. Do you have any suggestions on how I can get a similar result from ONNX Runtime inference?
I am looking forward to your reply, thank you very much!
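(For what it's worth, a minimal ONNX Runtime sketch of that step; the model path, input shape, and the choice of the first output as the anomaly map are assumptions:)

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")   # assumed path to the exported model
input_name = session.get_inputs()[0].name

# The image must be preprocessed exactly as during training: float32, NCHW.
image = np.zeros((1, 3, 256, 256), dtype=np.float32)
outputs = session.run(None, {input_name: image})

anomaly_map = outputs[0]            # plays the role of predictions[self.output_blob]
score = float(anomaly_map.max())    # one common image-level score: max of the map
```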
Dataset: Folder
Model: PatchCore
Steps to reproduce the behavior: run `torch_inference.py`
OS information: PyTorch 1.12.1, CUDA 11.3
Screenshots: No response
Pip/GitHub: pip
What version/branch did you use?: No response