How to run Torch Inferencer? #1076
Comments
Hi everyone, @ashwinvaidya17, @samet-akcay. Instead of loading the 'state_dict' key, it's loading the 'model' key, which does not exist. Thanks!
@samet-akcay, sorry to disturb, but is there any plan to fix this?
I am facing the same issue, and I can't find a workaround. I need this because I am trying to get the pred_mask to do my own processing on it. Is anyone aware of another way to get the predicted masks? Thanks!
Hey everyone, I'll have a look at the issue.
I have looked into this. This is actually not a bug. In order to use the Torch inferencer, the model first has to be exported by setting export_mode in the config file:

```yaml
...
optimization:
  export_mode: torch # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  ...
```

So if you set export_mode to torch, a Torch model is exported alongside the checkpoint, and that exported file is what the Torch inferencer expects to load (the Lightning .ckpt does not contain a 'model' key). Depending on your reply, I will convert this issue to a Q&A in Discussions.
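For reference, a minimal sketch of running the exported Torch model, assuming the anomalib 0.4-style TorchInferencer API; the paths below are illustrative:

```python
# Sketch: load the torch-exported model (produced with export_mode: torch)
# and run prediction. Assumes the anomalib 0.4-style TorchInferencer API;
# the paths below are illustrative.
from anomalib.deploy import TorchInferencer

inferencer = TorchInferencer(path="results/patchcore/folder/run/weights/torch/model.pt")
predictions = inferencer.predict(image="path/to/test_image.png")

print(predictions.pred_score)  # image-level anomaly score
print(predictions.pred_mask)   # pixel-level mask, if the task produces one
```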
This issue was moved to a discussion; the conversation continues there.
Describe the bug
Hello @ashwinvaidya17 and @samet-akcay, I noticed that
./tools/inference/torch_inference.py
changed between the two versions shown. When configuring the parser (parser = ArgumentParser()), @ashwinvaidya17's version passes the arguments into a config, while the latest version from @samet-akcay (the one I am using now) does not. There is also a corresponding difference in the
load_model
function that loads the model. Because I am using the latest version of anomalib (samet-akcay), I get an error when I run PyTorch inference. I have located the specific place where the error occurs.
The specific error message is:
KeyError: 'model'
So I debugged the loading code and confirmed that this is indeed the problem: there is no 'model' key in the loaded checkpoint.
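For anyone hitting the same thing, a quick way to confirm what the checkpoint actually contains (a sketch; the checkpoint path is illustrative):

```python
# Sketch: inspect the keys stored in a Lightning checkpoint to see why
# indexing it with "model" raises a KeyError. The path is illustrative.
import torch

checkpoint = torch.load("results/patchcore/folder/run/weights/model.ckpt", map_location="cpu")
print(list(checkpoint.keys()))
# A Lightning .ckpt typically contains "state_dict" (among other keys),
# not "model", which is why checkpoint["model"] fails here.
```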
So I have the following questions:

1. Is this KeyError: 'model' problem a bug that really exists? I see it specifically mentioned in the changelog (#600, #601). How should I go about fixing it?
2. I am actually wrapping my own program around ONNX Runtime to implement the inference. I noticed that OpenVINO's confidence score is calculated after taking a value from predictions = predictions[self.output_blob]. Do you have any suggestions on how I can get a similar result from ONNX Runtime inference? (A sketch follows below.)

I am looking forward to your reply, thank you very much!
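Regarding question 2, here is a minimal ONNX Runtime sketch that mirrors OpenVINO's predictions[self.output_blob] step by reading the session's first output. The model path, input size, and preprocessing are assumptions; match them to your exported model:

```python
# Sketch: run an anomalib-exported ONNX model with ONNX Runtime and grab the
# raw output, analogous to OpenVINO's predictions[self.output_blob].
# The model path, input size, and preprocessing below are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("results/patchcore/folder/run/weights/onnx/model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name  # counterpart of self.output_blob

# Dummy preprocessed image: NCHW float32, already resized and normalized.
image = np.random.rand(1, 3, 256, 256).astype(np.float32)

predictions = session.run([output_name], {input_name: image})[0]
print(predictions.shape)  # e.g. the anomaly map; post-process as the OpenVINO inferencer does
```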
Dataset
Folder
Model
PatchCore
Steps to reproduce the behavior
torch_inference.py
OS information
PyTorch 1.12.1
CUDA 11.3
Expected behavior
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct