🐞 Add device flag #601
Conversation
Only a single comment...
@ashwinvaidya17, #600 points out that …
```diff
@@ -84,7 +104,7 @@ def load_model(self, path: Union[str, Path]) -> AnomalyModule:
         model = get_model(self.config)
         model.load_state_dict(torch.load(path)["state_dict"])
```
Don't we need to pass `map_location` here? It might fail otherwise if we load a GPU model on CPU.
https://pytorch.org/tutorials/recipes/recipes/save_load_across_devices.html
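A minimal sketch of the suggested change, hardcoding CPU for illustration (the real fix would presumably derive the target from the new device flag; `get_model` is the helper from the diff above):

```python
from pathlib import Path
from typing import Union

import torch

def load_model(self, path: Union[str, Path]) -> "AnomalyModule":
    """Load checkpoint weights so they also work on CPU-only machines."""
    model = get_model(self.config)
    # map_location remaps storages saved on e.g. "cuda:0" to the target
    # device, so a GPU-trained checkpoint loads without CUDA available.
    model.load_state_dict(
        torch.load(path, map_location=torch.device("cpu"))["state_dict"]
    )
    return model
```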
Good point
I am now getting the same FPS from the torch_inference script (without visualization) and the benchmarking script. I guess this is ready to be merged.
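For reference, a rough way to compare throughput between two inference paths, with warm-up iterations excluded from the timing; `infer` and `inputs` are placeholders, not anomalib's actual API:

```python
import time

def measure_fps(infer, inputs, warmup: int = 5) -> float:
    """Return frames per second of `infer` over `inputs`, skipping warm-up."""
    for x in inputs[:warmup]:
        infer(x)  # warm-up passes are not timed
    start = time.perf_counter()
    for x in inputs[warmup:]:
        infer(x)
    elapsed = time.perf_counter() - start
    return (len(inputs) - warmup) / elapsed
```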
Description
Fixes ONNX inference and TensorRT optimisation (#600).
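A hypothetical sketch of device-aware ONNX inference with `onnxruntime`, choosing the execution provider from a device argument (function and parameter names are illustrative, not the PR's actual code):

```python
import numpy as np
import onnxruntime as ort

def run_onnx(model_path: str, image: np.ndarray, device: str = "cpu") -> np.ndarray:
    """Run one forward pass, on GPU if requested, otherwise on CPU."""
    providers = (
        ["CUDAExecutionProvider", "CPUExecutionProvider"]
        if device == "cuda"
        else ["CPUExecutionProvider"]
    )
    session = ort.InferenceSession(model_path, providers=providers)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: image})[0]
```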
Changes
Checklist