
When I change the image size, I get a problem at the inference step #394

Closed
ZeynepRuveyda opened this issue Jun 29, 2022 · 2 comments · Fixed by #390
ZeynepRuveyda commented Jun 29, 2022

Hello, I changed the image size in my config file, and after that the inference step stopped working. I got the warning and error below. Can you help me, please?

UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (MinMax). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function
from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.

warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/tools/inference.py", line 170, in <module>
stream()
File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/tools/inference.py", line 106, in stream
inferencer = TorchInferencer(config=config, model_source=args.weight_path, meta_data_path=args.meta_data)
File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/anomalib/deploy/inferencers/torch.py", line 55, in __init__
self.model = self.load_model(model_source)
File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/anomalib/deploy/inferencers/torch.py", line 86, in load_model
model.load_state_dict(torch.load(path)["state_dict"])
File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PadimLightning:
size mismatch for model.gaussian.mean: copying a param with shape torch.Size([100, 12500]) from checkpoint, the shape in current model is torch.Size([100, 4096]).
size mismatch for model.gaussian.inv_covariance: copying a param with shape torch.Size([12500, 100, 100]) from checkpoint, the shape in current model is torch.Size([4096, 100, 100]).
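The two shapes in the error are consistent with PaDiM's feature-map geometry: the Gaussian statistics have one entry per cell of the embedding grid, and that grid scales with the input resolution. A quick sanity check, as a sketch (the divisor of 4 is an assumption based on the stride of resnet18's `layer1`, which sets the grid size):

```python
def padim_patch_count(height: int, width: int, stride: int = 4) -> int:
    """Number of spatial cells in the PaDiM embedding grid.

    Assumes the feature map is the input resolution divided by the
    backbone stride (4 for resnet18's layer1 in this sketch).
    """
    return (height // stride) * (width // stride)

# Current model built with image_size: [200, 1000]
print(padim_patch_count(200, 1000))  # 50 * 250 = 12500
# Checkpoint trained at the default image_size: [256, 256]
print(padim_patch_count(256, 256))   # 64 * 64 = 4096
```

The mismatch (12500 vs. 4096 patches) means the checkpoint was produced at a different `image_size` than the one in the current config, so the saved statistics cannot be copied into the rebuilt model.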

Here is my config file

```yaml
dataset:
  name: mvtec # options: [mvtec, btech, folder]
  format: mvtec
  path: ./datasets/MVTec
  category: VSB
  task: segmentation
  image_size: [200, 1000]
  train_batch_size: 4
  test_batch_size: 1
  num_workers: 36
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: padim
  backbone: resnet18
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    image_default: 17
    pixel_default: 33
    adaptive: false

project:
  seed: 42
  path: ./results

logging:
  log_images_to: ["local"] # options: [wandb, tensorboard, local]. Make sure you also set the logger when using wandb or tensorboard.
  logger: [] # options: [tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to the respective logger.

optimization:
  openvino:
    apply: false
```

@samet-akcay added the Bug, Model and Inference labels Jul 1, 2022
@samet-akcay self-assigned this Jul 12, 2022
samet-akcay (Contributor) commented:
@ZeynepRuveyda, can you try this with the new lightning inferencer? It seems to work on my end.

The new lightning inferencer replaces the TorchInferencer. With this approach, anything that works during training will also work during inference.
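Independent of which inferencer is used, the underlying failure is generic PyTorch behaviour: `load_state_dict` refuses to copy a checkpoint tensor into a buffer whose shape was derived from a different config. A minimal reproduction sketch (the module and its names are hypothetical, not anomalib's actual classes):

```python
import torch
from torch import nn

class GaussianStats(nn.Module):
    """Toy module whose buffer shape depends on the configured image size,
    mimicking how PaDiM's per-patch statistics scale with resolution."""

    def __init__(self, height: int, width: int, n_features: int = 100, stride: int = 4):
        super().__init__()
        n_patches = (height // stride) * (width // stride)
        self.register_buffer("mean", torch.zeros(n_features, n_patches))

# "Train" at [200, 1000] and save the state dict
trained = GaussianStats(200, 1000)
state = trained.state_dict()

# Rebuild at [256, 256] and try to load the old checkpoint
fresh = GaussianStats(256, 256)
try:
    fresh.load_state_dict(state)
except RuntimeError as err:
    print("size mismatch reproduced:", "size mismatch" in str(err))
```

The practical fix is to keep `image_size` identical between training and inference, or to retrain after changing it, since PaDiM's saved statistics are tied to the training resolution.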

@samet-akcay removed the Bug label Jul 12, 2022
@samet-akcay linked a pull request Jul 12, 2022 that will close this issue
samet-akcay (Contributor) commented:

Closing this due to inactivity. Feel free to re-open if you encounter the same issue again.
