Deprecate OptimalF1 metric #796

Merged 2 commits on Dec 16, 2022
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -28,6 +28,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- 📊 Update DFM results (<https://github.com/openvinotoolkit/anomalib/pull/674>)
- Optimize anomaly score calculation for PatchCore (<https://github.com/openvinotoolkit/anomalib/pull/633>)

### Deprecated

- Deprecate OptimalF1 metric in favor of AnomalyScoreThreshold and F1Score (<https://github.com/openvinotoolkit/anomalib/pull/796>)

### Fixed

- Fix PatchCore performance deterioration by reverting changes to Average Pooling layer (<https://github.com/openvinotoolkit/anomalib/pull/791>)
9 changes: 9 additions & 0 deletions anomalib/utils/metrics/optimal_f1.py
@@ -3,6 +3,8 @@
# Copyright (C) 2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import warnings

import torch
from torchmetrics import Metric, PrecisionRecallCurve

@@ -17,6 +19,13 @@ class OptimalF1(Metric):
    full_state_update: bool = False

    def __init__(self, num_classes: int, **kwargs):
        warnings.warn(
            DeprecationWarning(
                "OptimalF1 metric is deprecated and will be removed in a future release. The optimal F1 score for "
                "Anomalib predictions can be obtained by computing the adaptive threshold with the "
                "AnomalyScoreThreshold metric and setting the computed threshold value in TorchMetrics F1Score metric."
            )
        )
        super().__init__(**kwargs)

        self.precision_recall_curve = PrecisionRecallCurve(num_classes=num_classes)
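
For reference, here is a minimal sketch of the replacement workflow the deprecation message describes: compute the adaptive threshold with AnomalyScoreThreshold, then pass that value to TorchMetrics' F1Score. The import path for AnomalyScoreThreshold, its no-argument constructor, and the plain `threshold=` keyword on F1Score (predating the `task` argument added in later torchmetrics releases) are assumptions about the library versions in use at the time of this PR, not part of the diff itself.

```python
# Sketch only: assumes AnomalyScoreThreshold is exported from anomalib.utils.metrics
# and that the installed torchmetrics accepts F1Score(threshold=...) for binary input.
import torch
from torchmetrics import F1Score

from anomalib.utils.metrics import AnomalyScoreThreshold

# Toy anomaly scores and ground-truth labels (0 = normal, 1 = anomalous).
scores = torch.tensor([0.1, 0.4, 0.35, 0.8, 0.9])
labels = torch.tensor([0, 0, 1, 1, 1])

# Step 1: compute the adaptive threshold that maximizes F1 over the score range.
threshold_metric = AnomalyScoreThreshold()
threshold_metric.update(scores, labels)
threshold = threshold_metric.compute()  # tensor holding the optimal threshold

# Step 2: evaluate F1 at that threshold with the standard TorchMetrics F1Score.
f1 = F1Score(threshold=threshold.item())
optimal_f1 = f1(scores, labels)
print(f"threshold={threshold.item():.3f}, optimal F1={optimal_f1.item():.3f}")
```

This reproduces what OptimalF1 computed internally, but splits the two concerns (threshold selection and F1 evaluation) across the two metrics named in the warning.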