[Bug]: Torch inferencer error #1047

Closed
1 task done
shrinand1996 opened this issue Apr 26, 2023 · 3 comments

Comments

@shrinand1996

Describe the bug

I have trained PADiM on a custom dataset. However, the torch_inference script fails with the following error:
KeyError: 'model'
/usr/local/lib/python3.9/dist-packages/anomalib/deploy/inferencers/torch_inferencer.py in load_model(self, path)
83 """
84
---> 85 model = torch.load(path, map_location=self.device)["state_dict"]
86 model.eval()
87 return model.to(self.device)

I tried changing ["model"] to ["state_dict"], but that did not help.
Please tell me how to fix this.
I need a customised inference pipeline, so I have to write my own infer() function.
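
For reference, a custom infer() wrapper can be built on top of anomalib's TorchInferencer once a torch model has been exported (see the maintainer's comment below). This is only a minimal sketch: the constructor arguments have changed between anomalib releases, and the model path is a placeholder for wherever your run writes the exported model.pt.

import numpy as np
from PIL import Image

from anomalib.deploy import TorchInferencer

# Placeholder path to the exported torch model; adjust to your results directory.
inferencer = TorchInferencer(
    path="results/padim/custom/weights/torch/model.pt",
    device="auto",
)

def infer(image_path: str) -> dict:
    """Hypothetical custom inference wrapper around TorchInferencer.predict()."""
    image = np.array(Image.open(image_path).convert("RGB"))
    prediction = inferencer.predict(image=image)
    return {"pred_score": prediction.pred_score, "pred_label": prediction.pred_label}

print(infer("/content/drive/MyDrive/custom/bad/000.png"))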

Dataset

Folder

Model

PADiM

Steps to reproduce the behavior

  1. Install anomalib on Colab
  2. Train PADiM on a custom dataset
  3. Perform inference using torch_inference (lightning_inference works)

OS information

  • Environment: Colab
  • Python version: [e.g. 3.8.10]
  • Anomalib version: [e.g. 0.3.6]
  • PyTorch version: [e.g. 1.9.0]
  • CUDA/cuDNN version: [e.g. 11.1]
  • GPU models and configuration: [e.g. 2x GeForce RTX 3090]
  • Any other relevant information: [e.g. I'm using a custom dataset]

Expected behavior

It should work as it did in previous versions.

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

dataset:
  name: custom
  format: folder
  path: /content/drive/MyDrive/custom
  normal_dir: good # name of the folder containing normal images.
  abnormal_dir: bad # name of the folder containing abnormal images.
  normal_test_dir: good # name of the folder containing normal test images.
  mask: #optional
  extensions: null
  task: classification
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: 256 # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
    - layer2
    - layer3
  coreset_sampling_ratio: 0.1
  num_neighbors: 9
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive #options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 0
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle

Logs

KeyError: 'model'
/usr/local/lib/python3.9/dist-packages/anomalib/deploy/inferencers/torch_inferencer.py in load_model(self, path)
     83         """
     84 
---> 85         model = torch.load(path, map_location=self.device)["state_dict"]
     86         model.eval()
     87         return model.to(self.device)

Code of Conduct

  • I agree to follow this project's Code of Conduct
@blaz-r
Contributor

blaz-r commented May 11, 2023

Hello @shrinand1996. Check issue #1076 for a workaround.

@blaz-r
Contributor

blaz-r commented Jun 27, 2023

Hello @shrinand1996. This has now been resolved in #1076, so check that thread if you still have this issue.
I think this issue can now be closed.

@samet-akcay
Contributor

As mentioned in #1076, please specify the torch export mode. This is needed because anomalib saves its checkpoints as Lightning models, while the torch inferencer expects an exported torch model.

optimization:
  export_mode: torch # options: onnx, openvino, torch
