Inference + Visualization #390

Merged
merged 16 commits into from
Jul 1, 2022
Changes from 4 commits
2 changes: 1 addition & 1 deletion README.md
@@ -159,7 +159,7 @@ The new CLI approach offers a lot more flexibility, details of which are explain

## Inference
### ⚠️ Anomalib < v.0.4.0
Anomalib contains several tools that can be used to perform inference with a trained model. The script in [`tools/inference`](tools/inference.py) contains an example of how the inference tools can be used to generate a prediction for an input image.
Anomalib contains several tools that can be used to perform inference with a trained model. The script in [`tools/inference`](tools/inference/lightning.py) contains an example of how the inference tools can be used to generate a prediction for an input image.

If the specified weight path points to a PyTorch Lightning checkpoint file (`.ckpt`), inference will run in PyTorch. If the path points to an ONNX graph (`.onnx`) or OpenVINO IR (`.bin` or `.xml`), inference will run in OpenVINO.
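For illustration, the backend is chosen purely by the weight file's extension (using the example paths from the docs below):

```bash
# .ckpt checkpoint -> inference runs in PyTorch
python tools/inference/lightning.py --config padim.yaml --weight_path results/weights/model.ckpt --image_path image.png

# .xml OpenVINO IR -> inference runs in OpenVINO
python tools/inference/openvino.py --config padim.yaml --weight_path results/openvino/model.xml --image_path image.png
```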

2 changes: 1 addition & 1 deletion anomalib/utils/callbacks/model_loader.py
@@ -41,7 +41,7 @@ def on_test_start(self, _trainer, pl_module: AnomalyModule) -> None: # pylint:
pl_module.load_state_dict(torch.load(self.weights_path)["state_dict"])

def on_predict_start(self, _trainer, pl_module: AnomalyModule) -> None:
"""Call when inferebce begins.
"""Call when inference begins.

Loads the model weights from ``weights_path`` into the PyTorch module.
"""
38 changes: 21 additions & 17 deletions docs/source/guides/inference.rst
@@ -5,33 +5,37 @@ Inference
Anomalib provides entrypoint scripts for using a trained model to generate predictions from a source of image data. This guide explains how to run inference with the standard PyTorch model and the exported OpenVINO model.


Torch Inference
PyTorch (Lightning) Inference
=============================
The entrypoint script in ``tools/inference.py`` can be used to run inference with a trained PyTorch model. The entrypoint script has several command line arguments that can be used to configure inference:

+-------------+----------+-------------------------------------------------------------------------------------+
| Parameter | Required | Description |
+=============+==========+=====================================================================================+
| config | True | Path to the model config file. |
+-------------+----------+-------------------------------------------------------------------------------------+
| weight_path | True | Path to the ``.ckpt`` model checkpoint file. |
+-------------+----------+-------------------------------------------------------------------------------------+
| image_path | True | Path to the image source. This can be a single image or a folder of images. |
+-------------+----------+-------------------------------------------------------------------------------------+
| save_data | False | Path to which the output images should be saved. Leave empty for live visualization.|
+-------------+----------+-------------------------------------------------------------------------------------+
The entrypoint script in ``tools/inference/lightning.py`` can be used to run inference with a trained PyTorch model. The script runs inference by loading a previously trained model into a PyTorch Lightning trainer and running its ``predict`` sequence. The entrypoint script has several command line arguments that can be used to configure inference:

+---------------------+----------+-------------------------------------------------------------------------------------+
| Parameter | Required | Description |
+=====================+==========+=====================================================================================+
| config | True | Path to the model config file. |
+---------------------+----------+-------------------------------------------------------------------------------------+
| weight_path | True | Path to the ``.ckpt`` model checkpoint file. |
+---------------------+----------+-------------------------------------------------------------------------------------+
| image_path | True | Path to the image source. This can be a single image or a folder of images. |
+---------------------+----------+-------------------------------------------------------------------------------------+
| save_path | False | Path to which the output images should be saved. |
+---------------------+----------+-------------------------------------------------------------------------------------+
| visualization_mode | False | Determines how the inference results are visualized. Options: "full", "simple". |
+---------------------+----------+-------------------------------------------------------------------------------------+
| disable_show_images | False | When this flag is passed, visualizations will not be shown on the screen. |
+---------------------+----------+-------------------------------------------------------------------------------------+

To run inference, call the script from the command line with the following parameters, e.g.:

``python tools/inference.py --config padim.yaml --weight_path results/weights/model.ckpt --image_path image.png``
``python tools/inference/lightning.py --config padim.yaml --weight_path results/weights/model.ckpt --image_path image.png``

This will run inference on the specified image file or all images in the folder. A visualization of the inference results, including the predicted heatmap and segmentation results (if applicable), will be displayed on the screen, like the example below.
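Programmatically, the script's predict flow looks roughly like the following. This is a sketch only: the helper names (``get_configurable_parameters``, ``get_model``, ``InferenceDataset``) come from anomalib's public API and are not part of this diff.

.. code-block:: python

    from pytorch_lightning import Trainer
    from torch.utils.data import DataLoader

    from anomalib.config import get_configurable_parameters
    from anomalib.data.inference import InferenceDataset
    from anomalib.models import get_model

    config = get_configurable_parameters(config_path="padim.yaml")
    model = get_model(config)

    # Wrap a single image (or a folder of images) in a dataset for prediction.
    dataset = InferenceDataset("image.png", image_size=tuple(config.dataset.image_size))

    # The .ckpt weights are restored via a model-loader callback before predict runs.
    trainer = Trainer(**config.trainer)
    predictions = trainer.predict(model=model, dataloaders=DataLoader(dataset))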



OpenVINO Inference
==================
To run OpenVINO inference, first make sure that your model has been exported to the OpenVINO IR format. Once the model has been exported, OpenVINO inference can be triggered by running the OpenVINO entrypoint script in ``tools/openvino.py``. The command line arguments are very similar to those of the PyTorch inference entrypoint script:
To run OpenVINO inference, first make sure that your model has been exported to the OpenVINO IR format. Once the model has been exported, OpenVINO inference can be triggered by running the OpenVINO entrypoint script in ``tools/inference/openvino.py``. The command line arguments are very similar to those of the PyTorch inference entrypoint script:

+-------------+----------+-------------------------------------------------------------------------------------+
| Parameter | Required | Description |
@@ -52,6 +56,6 @@ For correct inference results, the ``meta_data`` argument should be specified and

As an example, OpenVINO inference can be triggered by the following command:

``python tools/openvino.py --config padim.yaml --weight_path results/openvino/model.xml --image_path image.png --meta_data results/openvino/meta_data.json``
``python tools/inference/openvino.py --config padim.yaml --weight_path results/openvino/model.xml --image_path image.png --meta_data results/openvino/meta_data.json``

Similar to PyTorch inference, the visualization results will be displayed on the screen, and optionally saved to the file system location specified by the ``save_data`` parameter.
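The same can be done from Python. The constructor call below matches the one in ``get_inferencer`` further down in this diff, while the ``predict`` method is an assumption about the inferencer interface:

.. code-block:: python

    from anomalib.config import get_configurable_parameters
    from anomalib.deploy.inferencers.openvino import OpenVINOInferencer

    config = get_configurable_parameters(config_path="padim.yaml")
    inferencer = OpenVINOInferencer(
        config=config,
        path="results/openvino/model.xml",
        meta_data_path="results/openvino/meta_data.json",
    )
    # ``predict`` is assumed here; it returns the anomaly prediction for one image.
    predictions = inferencer.predict(image="image.png")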
@@ -98,13 +98,13 @@ def get_inferencer(config_path: Path, weight_path: Path, meta_data_path: Optiona
inferencer: Inferencer
if extension in (".ckpt"):
module = import_module("anomalib.deploy.inferencers.torch")
TorchInferencer = getattr(module, "TorchInferencer")
inferencer = TorchInferencer(config=config, model_source=weight_path, meta_data_path=meta_data_path)
torch_inferencer = getattr(module, "TorchInferencer")
inferencer = torch_inferencer(config=config, model_source=weight_path, meta_data_path=meta_data_path)

elif extension in (".onnx", ".bin", ".xml"):
module = import_module("anomalib.deploy.inferencers.openvino")
OpenVINOInferencer = getattr(module, "OpenVINOInferencer")
inferencer = OpenVINOInferencer(config=config, path=weight_path, meta_data_path=meta_data_path)
openvino_inferencer = getattr(module, "OpenVINOInferencer")
inferencer = openvino_inferencer(config=config, path=weight_path, meta_data_path=meta_data_path)

else:
raise ValueError(
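A usage sketch for the helper above (paths are hypothetical; the extension of ``weight_path`` selects the backend):

```python
from pathlib import Path

# ".ckpt" selects the Torch inferencer; ".onnx", ".bin", or ".xml" selects OpenVINO.
inferencer = get_inferencer(
    config_path=Path("padim.yaml"),
    weight_path=Path("results/weights/model.ckpt"),
)
```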
3 changes: 3 additions & 0 deletions tools/inference.py → tools/inference/lightning.py
@@ -1,5 +1,8 @@
"""Inference Entrypoint script."""

# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import warnings
from argparse import ArgumentParser, Namespace
from pathlib import Path
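The parser body is collapsed in this diff; below is a sketch of how the flags from the table in the docs could be declared. The helper name, defaults, and help strings are assumptions, not the script's actual code.

```python
def get_args() -> Namespace:  # hypothetical helper; the real parser body is collapsed above
    parser = ArgumentParser()
    parser.add_argument("--config", type=Path, required=True, help="Path to the model config file.")
    parser.add_argument("--weight_path", type=Path, required=True, help="Path to the .ckpt checkpoint.")
    parser.add_argument("--image_path", type=Path, required=True, help="Image file or folder of images.")
    parser.add_argument("--save_path", type=Path, required=False, help="Where to save output images.")
    parser.add_argument("--visualization_mode", type=str, default="simple", choices=["full", "simple"])
    parser.add_argument("--disable_show_images", action="store_true", help="Do not show results on screen.")
    return parser.parse_args()
```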
File renamed without changes.