Describe the bug
I have trained PaDiM on a custom dataset. However, the torch_inference script fails with the following error:
```
KeyError: 'model'

/usr/local/lib/python3.9/dist-packages/anomalib/deploy/inferencers/torch_inferencer.py in load_model(self, path)
     83         """
     84
---> 85         model = torch.load(path, map_location=self.device)["state_dict"]
     86         model.eval()
     87         return model.to(self.device)
```
I tried changing `["model"]` to `["state_dict"]`, but it did not help.
Please tell us how to fix this.
I need a customised inference script, so I have to write a custom `infer()` function.
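For what it's worth, the error pattern suggests the path being passed is the Lightning checkpoint (`.ckpt`) written during training, which stores raw weights under a `"state_dict"` key rather than a full model object under `"model"`. A minimal sketch illustrating why indexing with either key cannot give the inferencer a callable model (the file name and key layout here are illustrative stand-ins, not anomalib's actual files):

```python
import torch
from collections import OrderedDict

# Simulate the layout of a PyTorch Lightning .ckpt file: the weights live
# under the "state_dict" key as plain tensors, not as an nn.Module object.
dummy_ckpt = {
    "epoch": 1,
    "state_dict": OrderedDict([("model.layer.weight", torch.zeros(3, 3))]),
}
torch.save(dummy_ckpt, "dummy.ckpt")

ckpt = torch.load("dummy.ckpt", map_location="cpu")
print(sorted(ckpt.keys()))               # no "model" key -> KeyError: 'model'
state_dict = ckpt["state_dict"]
print(isinstance(state_dict, torch.nn.Module))  # False: calling .eval() on it
                                                # would raise AttributeError
```

So swapping the key only moves the failure: `ckpt["state_dict"]` is an `OrderedDict` of tensors, and calling `.eval()` on it raises `AttributeError` instead of `KeyError`.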
Dataset
Folder
Model
PADiM
Steps to reproduce the behavior
Install anomalib on Colab.
Train on a custom dataset (PaDiM).
Perform inference using torch_inference (lightning_inference works).
OS information
Env: Colab
Python version: [e.g. 3.8.10]
Anomalib version: [e.g. 0.3.6]
PyTorch version: [e.g. 1.9.0]
CUDA/cuDNN version: [e.g. 11.1]
GPU models and configuration: [e.g. 2x GeForce RTX 3090]
Any other relevant information: [e.g. I'm using a custom dataset]
Expected behavior
It should work as it did in previous versions.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
```yaml
dataset:
  name: custom
  format: folder
  path: /content/drive/MyDrive/custom
  normal_dir: good # name of the folder containing normal images.
  abnormal_dir: bad # name of the folder containing abnormal images.
  normal_test_dir: good # name of the folder containing normal test images.
  mask: # optional
  extensions: null
  task: classification
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: 256 # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
    - layer2
    - layer3
  coreset_sampling_ratio: 0.1
  num_neighbors: 9
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 0
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
```
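As a side note, with `export_mode: null` no standalone exported model is written at the end of training, so only the Lightning `.ckpt` is available on disk. A custom `infer()` can be kept independent of how the checkpoint is loaded by accepting an already-constructed module; a minimal sketch (the function and the stand-in model are illustrative assumptions, not anomalib API):

```python
import torch

def infer(model: torch.nn.Module, image: torch.Tensor, device: str = "cpu") -> torch.Tensor:
    """Run one image (or batch) through an already-constructed model.

    Expects a real nn.Module; a raw state_dict (as found in a Lightning
    .ckpt under "state_dict") must first be loaded into a model instance
    via model.load_state_dict(...) before it can be called like this.
    """
    model = model.to(device).eval()
    with torch.no_grad():
        return model(image.to(device))

# Usage with a stand-in model (a real run would construct the trained model first):
dummy = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
out = infer(dummy, torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 1, 256, 256])
```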
Logs
```
KeyError: 'model'

/usr/local/lib/python3.9/dist-packages/anomalib/deploy/inferencers/torch_inferencer.py in load_model(self, path)
     83         """
     84
---> 85         model = torch.load(path, map_location=self.device)["state_dict"]
     86         model.eval()
     87         return model.to(self.device)
```
Code of Conduct
I agree to follow this project's Code of Conduct