
How to run Torch Inferencer? #1076

Closed
laogonggong847 opened this issue May 6, 2023 · 5 comments

@laogonggong847

Describe the bug

Hello @ashwinvaidya17 and @samet-akcay, I noticed that ./tools/inference/torch_inference.py appears to have changed between the two versions, as shown below.

[screenshot: torch_inference.py in the two versions]

When configuring the parser (parser = ArgumentParser()), @ashwinvaidya17's version passes in a config argument, while the latest version by @samet-akcay (the one I'm using now) does not.

[screenshot: the parser definitions in the two versions]

There is also a difference in the corresponding load_model function that loads the model:

[screenshots: load_model in the two versions]

Because I am using the latest version of anomalib (@samet-akcay's), PyTorch inference fails with an error. I have traced it to this line:

model = torch.load(path, map_location=self.device)["model"]

The specific error message is:
KeyError: 'model'

So I used the following snippet to debug it:

[screenshot: debugging snippet and its output]

This confirmed the mistake: the loaded dictionary indeed has no "model" key.
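A minimal sketch of that kind of check (the path here is a placeholder for my exported weights file):

import torch

checkpoint = torch.load("path/to/model.ckpt", map_location="cpu")
print(checkpoint.keys())  # e.g. dict_keys(['epoch', ..., 'state_dict', ...]); there is no 'model' key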


So I have the following questions:

1. Is this KeyError: 'model' an actual bug? I see it specifically mentioned in the update files (#600, #601). How should I go about fixing it?

2. I am actually building my own program that uses ONNX Runtime for inference. I noticed that OpenVINO's confidence score is calculated after reading a value via predictions = predictions[self.output_blob]. Do you have any suggestions on how I can obtain a similar result from ONNX Runtime inference?
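To illustrate question 2, here is a minimal ONNX Runtime sketch of what I mean (the model path, input shape, and the max-based score are assumptions on my part, mirroring how the OpenVINO inferencer reads predictions[self.output_blob]):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # hypothetical path
input_name = session.get_inputs()[0].name

# a dummy image; in practice this would be preprocessed exactly as during training
image = np.random.rand(1, 3, 256, 256).astype(np.float32)

anomaly_map = session.run(None, {input_name: image})[0]  # analogous to predictions[self.output_blob]
pred_score = float(anomaly_map.max())  # one plausible image-level score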

I am looking forward to your reply, thank you very much!

Dataset

Folder

Model

PatchCore

Steps to reproduce the behavior

torch_inference.py

OS information

PyTorch 1.12.1
CUDA 11.3

Expected behavior

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project's Code of Conduct
@glucasol

Hi everyone, @ashwinvaidya17, @samet-akcay
Any update here? I am getting the same error: KeyError: 'model'.
Now the model is being loaded like this:

model = torch.load(path, map_location=self.device)["model"]

Instead of loading the 'state_dict' key, it's loading the 'model' key, which does not exist.
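A possible workaround sketch, assuming the file is an ordinary Lightning checkpoint and model is an already-instantiated anomalib model:

import torch

checkpoint = torch.load(path, map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # fall back to the raw dict if keys differ
model.load_state_dict(state_dict)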

Thanks!

@nguyenanhtuan1008

@samet-akcay, sorry to disturb you, but is there any plan for fixing this?

@ironllamagirl

ironllamagirl commented Jun 8, 2023

I am facing the same issue, and I can't find a workaround.

I need this because I am trying to get the pred_mask to do my own processing on it. Is anyone aware of another way to get the predicted masks? Thanks!

@samet-akcay samet-akcay self-assigned this Jun 8, 2023
@samet-akcay
Contributor

Hey everyone, I'll have a look at the issue.

@samet-akcay samet-akcay added the Bug Something isn't working label Jun 8, 2023
@samet-akcay samet-akcay mentioned this issue Jun 9, 2023
13 tasks
@samet-akcay samet-akcay removed the Bug Something isn't working label Jun 9, 2023
@samet-akcay samet-akcay changed the title [Bug]: Version change causes torch_inference error How to run Torch Inferencer? Jun 9, 2023
@samet-akcay
Contributor

samet-akcay commented Jun 9, 2023

I have looked into this. This is actually not a bug. In order to use torch_inference.py, one would need to export the model to torch via the config file. For example, the corresponding section in a config file would look something like this:

logging:
  ...

optimization:
  export_mode: torch # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  ...

So if you set the export_mode to torch, the current version should work. Let me know if you still have issues.
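With the model exported that way, a minimal Python sketch of running inference would be (the paths are placeholders, and the exact TorchInferencer signature may differ between versions):

from anomalib.deploy import TorchInferencer

inferencer = TorchInferencer(path="results/patchcore/folder/weights/torch/model.pt")  # hypothetical path
predictions = inferencer.predict(image="path/to/image.png")
print(predictions.pred_score)  # predictions.pred_mask holds the predicted mask, if attribute names match your version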

Depending on your reply, I will convert this issue to a Q&A in Discussions.

@openvinotoolkit openvinotoolkit locked and limited conversation to collaborators Jun 9, 2023
@samet-akcay samet-akcay converted this issue into discussion #1122 Jun 9, 2023
