Refactor anomalib to new annotation format, add refurb and pyupgrade #845

Merged
merged 87 commits on Jan 26, 2023
Changes from 80 commits

Commits
87 commits
d10f166
Add missing type hints to export
samet-akcay Jan 16, 2023
201a335
Add missing type hints to inferencer interfaces
samet-akcay Jan 16, 2023
1b4f2a0
Add missing type hints to post-processing modules
samet-akcay Jan 16, 2023
039da47
Add missing type hints to pre-processing modules
samet-akcay Jan 16, 2023
f7c8db5
Add missing type hints to nncf callback
samet-akcay Jan 16, 2023
66bda3c
Add missing type hints to visualizer callbacks
samet-akcay Jan 16, 2023
536078b
Add missing type hints to cdf normalization callbacks
samet-akcay Jan 16, 2023
cfd86fa
Add missing type hints to export callback
samet-akcay Jan 16, 2023
495a9bf
Add missing type hints to metrics configuration callback
samet-akcay Jan 16, 2023
6c394d0
Add missing type hints to min-max normalization configuration callback
samet-akcay Jan 16, 2023
075b2fb
Add missing type hints to init callbacks
samet-akcay Jan 16, 2023
61beeae
Add missing type hints to model loader callback
samet-akcay Jan 16, 2023
fc448dd
Add missing type hints to post processing callback
samet-akcay Jan 16, 2023
96b2092
Add missing type hints to tiler configuration callback
samet-akcay Jan 16, 2023
8ee2613
Add missing type hints to timer callback
samet-akcay Jan 16, 2023
3ab7cdc
Add missing type hints to utils
samet-akcay Jan 16, 2023
a670e4b
Merge branch 'main' of github.com:openvinotoolkit/anomalib into refac…
samet-akcay Jan 16, 2023
e4bdb46
Refactored datamodule
samet-akcay Jan 16, 2023
415e76b
Add missing type hints to dataset
samet-akcay Jan 16, 2023
e7a620e
Add missing type hints to download
samet-akcay Jan 16, 2023
7e521da
Add missing type hints to folder dataset
samet-akcay Jan 16, 2023
2b22dab
Add missing type hints to mvtec dataset
samet-akcay Jan 16, 2023
a8a167d
Add missing type hints to mvtec dataset
samet-akcay Jan 16, 2023
c87f582
Changed method signature of forward in AnomalyModule
samet-akcay Jan 16, 2023
b05d20f
Changed method signature of validation step
samet-akcay Jan 16, 2023
1226ecc
Add type hints to cflow
samet-akcay Jan 16, 2023
92f4335
Add type hints to csflow
samet-akcay Jan 16, 2023
b654982
Add type hints to dfkde
samet-akcay Jan 16, 2023
0f62690
Add type hints to dfm
samet-akcay Jan 16, 2023
8cb435c
Add type hints to dfm
samet-akcay Jan 16, 2023
9e8e37a
Add type hints to draem
samet-akcay Jan 16, 2023
7fb21d3
Add type hints to fastflow
samet-akcay Jan 16, 2023
f78cf4a
Add type hints to ganomaly
samet-akcay Jan 16, 2023
af51361
Add type hints to padim and patchcore
samet-akcay Jan 16, 2023
fa679f0
Add type hints to the rest of the models
samet-akcay Jan 16, 2023
6667af7
Add type hints to stfpm
samet-akcay Jan 16, 2023
c9ea131
Add type hints to validation step
samet-akcay Jan 17, 2023
228eaed
Add type hints to validation step
samet-akcay Jan 17, 2023
f50886a
Add type hints to cfa validation step
samet-akcay Jan 17, 2023
2f1deaa
Adjust the type hint for the rest of the models
samet-akcay Jan 17, 2023
76df2ee
Fix the type hint for self.loss
samet-akcay Jan 17, 2023
9e6d4af
Metrics are updated
samet-akcay Jan 17, 2023
8a1e39b
Run pyupgrade for the first time.
samet-akcay Jan 17, 2023
0d246ad
Edited config to the new annotation
samet-akcay Jan 17, 2023
00b45cb
Converted video.py to new annotation format
samet-akcay Jan 17, 2023
86e55ce
Converted avenue.py to new annotation format
samet-akcay Jan 17, 2023
591b74a
Converted btech.py to new annotation format
samet-akcay Jan 17, 2023
aec7494
Convert to new annotation format - folder
samet-akcay Jan 17, 2023
ee2f284
Convert to new annotation format - inference data
samet-akcay Jan 17, 2023
507ddf3
Convert to new annotation format - init data
samet-akcay Jan 17, 2023
c78ccb1
Convert to new annotation format - mvtecdata
samet-akcay Jan 17, 2023
aedfeed
Convert to new annotation format - synthetic data
samet-akcay Jan 17, 2023
d0e507b
Convert to new annotation format - ucsd data
samet-akcay Jan 17, 2023
78f2e52
Convert to new annotation format - visa data
samet-akcay Jan 17, 2023
048c4f6
Convert to new annotation format - inferencer
samet-akcay Jan 17, 2023
24bccbe
Convert to new annotation format - cfa model
samet-akcay Jan 17, 2023
5cd724c
Convert to new annotation format - cflow model
samet-akcay Jan 17, 2023
4df6383
Convert to new annotation format - model components
samet-akcay Jan 17, 2023
b94c876
Convert to new annotation format - cflow model
samet-akcay Jan 17, 2023
c544759
Convert to new annotation format - dfkde model
samet-akcay Jan 17, 2023
fae5fb6
Convert to new annotation format - dfm model
samet-akcay Jan 17, 2023
de6c411
Convert to new annotation format - draem model
samet-akcay Jan 17, 2023
aa4959e
Convert to new annotation format - fastflow model
samet-akcay Jan 17, 2023
02ad225
Convert to new annotation format - ganomaly model
samet-akcay Jan 17, 2023
8ce58a9
Convert to new annotation format - padim model
samet-akcay Jan 17, 2023
0ea8e08
Convert to new annotation format - patchcore model
samet-akcay Jan 17, 2023
71f8f99
Convert to new annotation format - reverse distillation model
samet-akcay Jan 17, 2023
fe44098
Convert to new annotation format - rkde model
samet-akcay Jan 17, 2023
5ad22c4
Convert to new annotation format - stfpm model
samet-akcay Jan 17, 2023
e99399c
Convert to new annotation format - data utils
samet-akcay Jan 17, 2023
bb7efdb
Convert to new annotation format - deploy utils
samet-akcay Jan 17, 2023
7faf3a8
Convert to new annotation format - model utils
samet-akcay Jan 17, 2023
97770a3
Convert to new annotation format - post processing utils
samet-akcay Jan 17, 2023
b8be9bd
Convert to new annotation format - pre processing utils
samet-akcay Jan 17, 2023
72f483c
Convert to new annotation format - model utils
samet-akcay Jan 17, 2023
734f059
Convert to new annotation format - callbacks utils
samet-akcay Jan 17, 2023
cf5155b
Convert to new annotation format - utils
samet-akcay Jan 17, 2023
1d48952
Change the method signature in csflow
samet-akcay Jan 17, 2023
b89da66
Fix pre-commit
samet-akcay Jan 17, 2023
16ece78
Fix ganomaly errors
samet-akcay Jan 18, 2023
b238fad
Update anomalib/models/reverse_distillation/components/bottleneck.py
samet-akcay Jan 20, 2023
fcd2176
Address the PR comments.
samet-akcay Jan 23, 2023
6353730
Merge branch 'refactor/add-pyupgrade-and-refurb' of github.com:openvi…
samet-akcay Jan 23, 2023
abb9900
Address refurb comments
samet-akcay Jan 24, 2023
d47e149
add exist_ok=True to address failed tests
samet-akcay Jan 24, 2023
7be9eb5
Fix tests
samet-akcay Jan 24, 2023
936e0ca
Update CHANGELOG.md
samet-akcay Jan 26, 2023
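Taken together, these commits migrate the code base from `typing`-based annotations (`Union`, `Optional`, `List`, `Tuple`) to the PEP 604/585 syntax enabled by `from __future__ import annotations`, and wire up pyupgrade and refurb to keep that style enforced. A minimal sketch of the pattern, using a hypothetical function that is not part of this diff:

```python
from __future__ import annotations  # makes the new syntax safe on Python 3.7+

from pathlib import Path

# Before this PR the signature would have been written as:
#   from typing import List, Optional, Union
#   def load_images(root: Union[Path, str], limit: Optional[int] = None) -> List[Path]:

# After the refactor the same signature uses builtin generics and PEP 604 unions:
def load_images(root: Path | str, limit: int | None = None) -> list[Path]:
    """Collect image paths under ``root``, optionally truncated to ``limit`` entries."""
    paths = sorted(Path(root).glob("*.png"))
    return paths if limit is None else paths[:limit]
```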
5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -63,6 +63,11 @@ repos:
types: [python]
exclude: "tests|docs"

- repo: https://github.com/asottile/pyupgrade
rev: v3.3.1
hooks:
- id: pyupgrade

# notebooks.
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.4.0
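With this hook in place, `pre-commit run pyupgrade --all-files` rewrites legacy syntax across the repository. The sketch below lists rewrites pyupgrade commonly performs; these are generic examples rather than lines from this code base, and the exact set depends on the configured `--py*-plus` argument, which is not shown in this hunk:

```python
# Generic examples of pyupgrade rewrites (assumed hook configuration):
#   Optional[str]               ->  str | None   (PEP 604; with --py310-plus, or earlier
#                                                 when `from __future__ import annotations` is present)
#   List[int], Tuple[int, ...]  ->  list[int], tuple[int, ...]   (PEP 585)
#   "{}".format(name)           ->  f"{name}"
#   class Foo(object): ...      ->  class Foo: ...
#   super(Foo, self).__init__() ->  super().__init__()
```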
53 changes: 27 additions & 26 deletions anomalib/config/config.py
@@ -6,10 +6,11 @@
# TODO: This would require a new design.
# TODO: https://jira.devtools.intel.com/browse/IAAALD-149

from __future__ import annotations

import time
from datetime import datetime
from pathlib import Path
from typing import List, Optional, Union
from warnings import warn

from omegaconf import DictConfig, ListConfig, OmegaConf
@@ -22,17 +23,17 @@ def _get_now_str(timestamp: float) -> str:
return datetime.fromtimestamp(timestamp).strftime("%Y-%m-%d_%H-%M-%S")


def update_input_size_config(config: Union[DictConfig, ListConfig]) -> Union[DictConfig, ListConfig]:
def update_input_size_config(config: DictConfig | ListConfig) -> DictConfig | ListConfig:
"""Update config with image size as tuple, effective input size and tiling stride.

Convert integer image size parameters into tuples, calculate the effective input size based on image size
and crop size, and set tiling stride if undefined.

Args:
config (Union[DictConfig, ListConfig]): Configurable parameters object
config (DictConfig | ListConfig): Configurable parameters object

Returns:
Union[DictConfig, ListConfig]: Configurable parameters with updated values
DictConfig | ListConfig: Configurable parameters with updated values
"""
# Image size: Ensure value is in the form [height, width]
image_size = config.dataset.get("image_size")
@@ -65,14 +66,14 @@ def update_input_size_config(config: Union[DictConfig, ListConfig]) -> Union[Dic
return config


def update_nncf_config(config: Union[DictConfig, ListConfig]) -> Union[DictConfig, ListConfig]:
def update_nncf_config(config: DictConfig | ListConfig) -> DictConfig | ListConfig:
"""Set the NNCF input size based on the value of the crop_size parameter in the configurable parameters object.

Args:
Args:
config (Union[DictConfig, ListConfig]): Configurable parameters of the current run.
config (DictConfig | ListConfig): Configurable parameters of the current run.

Returns:
Union[DictConfig, ListConfig]: Updated configurable parameters in DictConfig object.
DictConfig | ListConfig: Updated configurable parameters in DictConfig object.
"""
crop_size = config.dataset.image_size
sample_size = (crop_size, crop_size) if isinstance(crop_size, int) else crop_size
@@ -87,19 +88,19 @@ def update_nncf_config(config: Union[DictConfig, ListConfig]) -> Union[DictConfi
return config


def update_multi_gpu_training_config(config: Union[DictConfig, ListConfig]) -> Union[DictConfig, ListConfig]:
def update_multi_gpu_training_config(config: DictConfig | ListConfig) -> DictConfig | ListConfig:
"""Updates the config to change learning rate based on number of gpus assigned.

Current behaviour is to ensure only ddp accelerator is used.

Args:
config (Union[DictConfig, ListConfig]): Configurable parameters for the current run
config (DictConfig | ListConfig): Configurable parameters for the current run

Raises:
ValueError: If unsupported accelerator is passed

Returns:
Union[DictConfig, ListConfig]: Updated config
DictConfig | ListConfig: Updated config
"""
# validate accelerator
if config.trainer.accelerator is not None:
@@ -119,22 +120,22 @@ def update_multi_gpu_training_config(config: Union[DictConfig, ListConfig]) -> U
# increase the learning rate by the number of devices
if "lr" in config.model:
# Number of GPUs can either be passed as gpus: 2 or gpus: [0,1]
n_gpus: Union[int, List] = 1
n_gpus: int | list = 1
if "trainer" in config and "gpus" in config.trainer:
n_gpus = config.trainer.gpus
lr_scaler = n_gpus if isinstance(n_gpus, int) else len(n_gpus)
config.model.lr = config.model.lr * lr_scaler
return config


def update_datasets_config(config: Union[DictConfig, ListConfig]) -> Union[DictConfig, ListConfig]:
def update_datasets_config(config: DictConfig | ListConfig) -> DictConfig | ListConfig:
"""Updates the dataset section of the config.

Args:
config (Union[DictConfig, ListConfig]): Configurable parameters for the current run.
config (DictConfig | ListConfig): Configurable parameters for the current run.

Returns:
Union[DictConfig, ListConfig]: Updated config
DictConfig | ListConfig: Updated config
"""
if "format" not in config.dataset.keys():
config.dataset.format = "mvtec"
@@ -200,23 +201,23 @@ def update_datasets_config(config: Union[DictConfig, ListConfig]) -> Union[DictC


def get_configurable_parameters(
model_name: Optional[str] = None,
config_path: Optional[Union[Path, str]] = None,
weight_file: Optional[str] = None,
config_filename: Optional[str] = "config",
config_file_extension: Optional[str] = "yaml",
) -> Union[DictConfig, ListConfig]:
model_name: str | None = None,
config_path: Path | str | None = None,
weight_file: str | None = None,
config_filename: str | None = "config",
config_file_extension: str | None = "yaml",
) -> DictConfig | ListConfig:
"""Get configurable parameters.

Args:
model_name: Optional[str]: (Default value = None)
config_path: Optional[Union[Path, str]]: (Default value = None)
model_name: str | None: (Default value = None)
config_path: Path | str | None: (Default value = None)
weight_file: Path to the weight file
config_filename: Optional[str]: (Default value = "config")
config_file_extension: Optional[str]: (Default value = "yaml")
config_filename: str | None: (Default value = "config")
config_file_extension: str | None: (Default value = "yaml")

Returns:
Union[DictConfig, ListConfig]: Configurable parameters in DictConfig object.
DictConfig | ListConfig: Configurable parameters in DictConfig object.
"""
if model_name is None and config_path is None:
raise ValueError(
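For reference, the refactored entry point is typically consumed as below; a minimal sketch in which the model name is a placeholder:

```python
from anomalib.config import get_configurable_parameters

# Raises ValueError when both model_name and config_path are None (see above).
config = get_configurable_parameters(model_name="padim")  # placeholder model name
print(type(config))  # omegaconf DictConfig (or ListConfig)
```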
7 changes: 4 additions & 3 deletions anomalib/data/__init__.py
@@ -3,8 +3,9 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from __future__ import annotations

import logging
from typing import Union

from omegaconf import DictConfig, ListConfig

@@ -21,11 +22,11 @@
logger = logging.getLogger(__name__)


def get_datamodule(config: Union[DictConfig, ListConfig]) -> AnomalibDataModule:
def get_datamodule(config: DictConfig | ListConfig) -> AnomalibDataModule:
"""Get Anomaly Datamodule.

Args:
config (Union[DictConfig, ListConfig]): Configuration of the anomaly model.
config (DictConfig | ListConfig): Configuration of the anomaly model.

Returns:
PyTorch Lightning DataModule
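A hedged usage sketch that chains the two refactored helpers; it assumes a model config whose dataset section is supported by `get_datamodule`:

```python
from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule

config = get_configurable_parameters(model_name="padim")  # placeholder model name
datamodule = get_datamodule(config)  # AnomalibDataModule (a LightningDataModule)
datamodule.prepare_data()            # downloads / prepares the dataset if needed
datamodule.setup()
```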
51 changes: 26 additions & 25 deletions anomalib/data/avenue.py
@@ -13,18 +13,19 @@
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

from __future__ import annotations

import logging
import math
from pathlib import Path
from shutil import move
from typing import Callable, Optional, Tuple, Union
from typing import Callable

import albumentations as A
import cv2
import numpy as np
import scipy.io
from pandas import DataFrame
from torch import Tensor

from anomalib.data.base import AnomalibVideoDataModule, AnomalibVideoDataset
from anomalib.data.task_type import TaskType
@@ -52,7 +53,7 @@
)


def make_avenue_dataset(root: Path, gt_dir: Path, split: Optional[Union[Split, str]] = None) -> DataFrame:
def make_avenue_dataset(root: Path, gt_dir: Path, split: Split | str | None = None) -> DataFrame:
"""Create CUHK Avenue dataset by parsing the file structure.

The files are expected to follow the structure:
@@ -62,7 +63,7 @@ def make_avenue_dataset(root: Path, gt_dir: Path, split: Optional[Union[Split, s
Args:
root (Path): Path to dataset
gt_dir (Path): Path to the ground truth
split (Optional[Union[Split, str]], optional): Dataset split (ie., either train or test). Defaults to None.
split (Split | str | None = None, optional): Dataset split (ie., either train or test). Defaults to None.

Example:
The following example shows how to get testing samples from Avenue dataset:
@@ -106,7 +107,7 @@ def make_avenue_dataset(root: Path, gt_dir: Path, split: Optional[Union[Split, s
class AvenueClipsIndexer(ClipsIndexer):
"""Clips class for UCSDped dataset."""

def get_mask(self, idx) -> Optional[Tensor]:
def get_mask(self, idx) -> np.ndarray | None:
"""Retrieve the masks from the file system."""

video_idx, frames_idx = self.get_clip_location(idx)
@@ -133,32 +134,32 @@ class AvenueDataset(AnomalibVideoDataset):

Args:
task (TaskType): Task type, 'classification', 'detection' or 'segmentation'
root (str): Path to the root of the dataset
root (Path | str): Path to the root of the dataset
gt_dir (str): Path to the ground truth files
transform (A.Compose): Albumentations Compose object describing the transforms that are applied to the inputs.
split (Optional[Union[Split, str]]): Split of the dataset, usually Split.TRAIN or Split.TEST
split (Split): Split of the dataset, usually Split.TRAIN or Split.TEST
clip_length_in_frames (int, optional): Number of video frames in each clip.
frames_between_clips (int, optional): Number of frames between each consecutive video clip.
"""

def __init__(
self,
task: TaskType,
root: Union[Path, str],
root: Path | str,
gt_dir: str,
transform: A.Compose,
split: Split,
clip_length_in_frames: int = 1,
frames_between_clips: int = 1,
):
) -> None:
super().__init__(task, transform, clip_length_in_frames, frames_between_clips)

self.root = root
self.gt_dir = gt_dir
self.root = root if isinstance(root, Path) else Path(root)
self.gt_dir = Path(gt_dir)
self.split = split
self.indexer_cls: Callable = AvenueClipsIndexer

def _setup(self):
def _setup(self) -> None:
"""Create and assign samples."""
self.samples = make_avenue_dataset(self.root, self.gt_dir, self.split)

@@ -172,23 +173,23 @@ class Avenue(AnomalibVideoDataModule):
clip_length_in_frames (int, optional): Number of video frames in each clip.
frames_between_clips (int, optional): Number of frames between each consecutive video clip.
task (TaskType): Task type, 'classification', 'detection' or 'segmentation'
image_size (Optional[Union[int, Tuple[int, int]]], optional): Size of the input image.
image_size (int | tuple[int, int] | None, optional): Size of the input image.
Defaults to None.
center_crop (Optional[Union[int, Tuple[int, int]]], optional): When provided, the images will be center-cropped
center_crop (int | tuple[int, int] | None, optional): When provided, the images will be center-cropped
to the provided dimensions.
normalize (bool): When True, the images will be normalized to the ImageNet statistics.
train_batch_size (int, optional): Training batch size. Defaults to 32.
eval_batch_size (int, optional): Test batch size. Defaults to 32.
num_workers (int, optional): Number of workers. Defaults to 8.
transform_config_train (Optional[Union[str, A.Compose]], optional): Config for pre-processing
transform_config_train (str | A.Compose | None, optional): Config for pre-processing
during training.
Defaults to None.
transform_config_val (Optional[Union[str, A.Compose]], optional): Config for pre-processing
transform_config_val (str | A.Compose | None, optional): Config for pre-processing
during validation.
Defaults to None.
val_split_mode (ValSplitMode): Setting that determines how the validation subset is obtained.
val_split_ratio (float): Fraction of train or test images that will be reserved for validation.
seed (Optional[int], optional): Seed which may be set to a fixed value for reproducibility.
seed (int | None, optional): Seed which may be set to a fixed value for reproducibility.
"""

def __init__(
@@ -198,18 +199,18 @@ def __init__(
clip_length_in_frames: int = 1,
frames_between_clips: int = 1,
task: TaskType = TaskType.SEGMENTATION,
image_size: Optional[Union[int, Tuple[int, int]]] = None,
center_crop: Optional[Union[int, Tuple[int, int]]] = None,
normalization: Union[InputNormalizationMethod, str] = InputNormalizationMethod.IMAGENET,
image_size: int | tuple[int, int] | None = None,
center_crop: int | tuple[int, int] | None = None,
normalization: str | InputNormalizationMethod = InputNormalizationMethod.IMAGENET,
train_batch_size: int = 32,
eval_batch_size: int = 32,
num_workers: int = 8,
transform_config_train: Optional[Union[str, A.Compose]] = None,
transform_config_eval: Optional[Union[str, A.Compose]] = None,
transform_config_train: str | A.Compose | None = None,
transform_config_eval: str | A.Compose | None = None,
val_split_mode: ValSplitMode = ValSplitMode.FROM_TEST,
val_split_ratio: float = 0.5,
seed: Optional[int] = None,
):
seed: int | None = None,
) -> None:
super().__init__(
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
@@ -275,7 +276,7 @@ def prepare_data(self) -> None:
self._convert_masks(self.gt_dir)

@staticmethod
def _convert_masks(gt_dir: Path):
def _convert_masks(gt_dir: Path) -> None:
"""Convert mask files to .png.

The masks in the Avenue datasets are provided as matlab (.mat) files. To speed up data loading, we convert the
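Finally, a minimal, hedged sketch of calling the refactored dataset helper shown above; the paths are placeholders and the snippet assumes the CUHK Avenue data has already been downloaded and the .mat masks converted:

```python
from pathlib import Path

from anomalib.data.avenue import make_avenue_dataset
from anomalib.data.utils import Split  # assumed import location of the Split enum

# Placeholder paths; point these at the Avenue videos and ground-truth directory.
root = Path("./datasets/avenue")
gt_dir = Path("./datasets/avenue/ground_truth_demo")

samples = make_avenue_dataset(root=root, gt_dir=gt_dir, split=Split.TEST)
print(samples.head())  # pandas DataFrame, one row per test video with its mask path
```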