Hello, I changed the image size in my config file, and after that my inference step stopped working. I got the warning and error below. Can you help me, please?
```
UserWarning: Torchmetrics v0.9 introduced a new argument class property called full_state_update that has
not been set for this class (MinMax). The property determines if update by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to False.
We provide an checking function from torchmetrics.utilities import check_forward_no_full_state
that can be used to check if the full_state_update=True (old and potential slower behaviour,
default for now) or if full_state_update=False can be used safely.
  warnings.warn(*args, **kwargs)
Traceback (most recent call last):
  File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/tools/inference.py", line 170, in <module>
    stream()
  File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/tools/inference.py", line 106, in stream
    inferencer = TorchInferencer(config=config, model_source=args.weight_path, meta_data_path=args.meta_data)
  File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/anomalib/deploy/inferencers/torch.py", line 55, in __init__
    self.model = self.load_model(model_source)
  File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/anomalib/deploy/inferencers/torch.py", line 86, in load_model
    model.load_state_dict(torch.load(path)["state_dict"])
  File "/home/zeynep_automi_ai/zeynep_anomalib/anomalib/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PadimLightning:
	size mismatch for model.gaussian.mean: copying a param with shape torch.Size([100, 12500]) from checkpoint, the shape in current model is torch.Size([100, 4096]).
	size mismatch for model.gaussian.inv_covariance: copying a param with shape torch.Size([12500, 100, 100]) from checkpoint, the shape in current model is torch.Size([4096, 100, 100]).
```
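For what it's worth, the numbers in the size mismatch line up with an `image_size` change: PaDiM keeps its Gaussian statistics per spatial position of the backbone feature map, and for the first extracted resnet18 layer that map has stride 4, so the patch count is roughly (H/4) × (W/4). A quick sanity check (the stride-4 assumption is mine, not from the traceback):

```python
def padim_patch_count(height: int, width: int, stride: int = 4) -> int:
    """Spatial positions in the backbone feature map that PaDiM keeps
    per-patch Gaussian statistics for (assuming a stride-4 first layer)."""
    return (height // stride) * (width // stride)

# Shape stored in the checkpoint: trained with image_size [200, 1000]
print(padim_patch_count(200, 1000))  # 12500
# Shape the freshly built model expects: the default 256x256 input
print(padim_patch_count(256, 256))   # 4096
```

So the checkpoint appears to come from a model trained at `[200, 1000]`, while the model being rebuilt for inference expects a 256×256 input; the two must be created from the same `image_size`.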
@ZeynepRuveyda, can you try this with the new lightning inferencer? It seems to be working on my end.
The new lightning inferencer replaces the TorchInferencer. With this new approach, anything that works during training will also work during inference.
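To make the failure mode concrete: `load_state_dict` in strict mode refuses to copy any tensor whose shape differs from the destination parameter, which is why the inference-time model has to be instantiated with the same `image_size` the checkpoint was trained with. A toy sketch of that check (plain Python tuples standing in for tensor shapes; the names are illustrative, not anomalib's):

```python
def strict_load(model_shapes, checkpoint_shapes):
    """Mimic torch.nn.Module.load_state_dict(strict=True): every tensor
    in the checkpoint must match the destination parameter's shape."""
    errors = []
    for name, shape in checkpoint_shapes.items():
        if name not in model_shapes:
            errors.append(f"Unexpected key(s) in state_dict: {name}")
        elif model_shapes[name] != shape:
            errors.append(
                f"size mismatch for {name}: copying a param with shape "
                f"{shape} from checkpoint, the shape in current model is "
                f"{model_shapes[name]}."
            )
    if errors:
        raise RuntimeError(
            "Error(s) in loading state_dict:\n\t" + "\n\t".join(errors)
        )

# Model built for a 256x256 input vs. a checkpoint trained at [200, 1000]:
current_model = {"model.gaussian.mean": (100, 4096)}
checkpoint = {"model.gaussian.mean": (100, 12500)}
try:
    strict_load(current_model, checkpoint)
except RuntimeError as err:
    print(err)  # size mismatch, just like the traceback above
```

The lightning path sidesteps this because the model is rebuilt from the same config that produced the checkpoint, so the shapes agree by construction.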
Here is my config file:

```yaml
dataset:
  name: mvtec # options: [mvtec, btech, folder]
  format: mvtec
  path: ./datasets/MVTec
  category: VSB
  task: segmentation
  image_size: [200, 1000]
  train_batch_size: 4
  test_batch_size: 1
  num_workers: 36
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: padim
  backbone: resnet18
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    image_default: 17
    pixel_default: 33
    adaptive: false

project:
  seed: 42
  path: ./results

logging:
  log_images_to: ["local"] # options: [wandb, tensorboard, local]. Make sure you also set logger with using wandb or tensorboard.
  logger: [] # options: [tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  openvino:
    apply: false
```