[pre-commit.ci] pre-commit suggestions (#1247)
* [pre-commit.ci] pre-commit suggestions

updates:
- [github.com/asottile/pyupgrade: v2.37.3 → v2.38.2](asottile/pyupgrade@v2.37.3...v2.38.2)
- https://github.com/myint/docformatter → https://github.com/PyCQA/docformatter
- [github.com/PyCQA/docformatter: v1.4 → v1.5.0](PyCQA/docformatter@v1.4...v1.5.0)
- [github.com/psf/black: 22.6.0 → 22.8.0](psf/black@22.6.0...22.8.0)
- [github.com/executablebooks/mdformat: 0.7.14 → 0.7.16](hukkin/mdformat@0.7.14...0.7.16)
- [github.com/PyCQA/flake8: 5.0.3 → 5.0.4](PyCQA/flake8@5.0.3...5.0.4)

* docs: oneliners

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* map

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jirka <[email protected]>
pre-commit-ci[bot] and Borda authored Oct 4, 2022
1 parent 7794b03 commit 3eb7db0
Showing 82 changed files with 333 additions and 453 deletions.
12 changes: 6 additions & 6 deletions .pre-commit-config.yaml
@@ -38,14 +38,14 @@ repos:
- id: detect-private-key

- repo: https://github.com/asottile/pyupgrade
rev: v2.37.3
rev: v2.38.2
hooks:
- id: pyupgrade
args: [--py36-plus]
name: Upgrade code

- repo: https://github.com/myint/docformatter
rev: v1.4
- repo: https://github.com/PyCQA/docformatter
rev: v1.5.0
hooks:
- id: docformatter
args: [--in-place, --wrap-summaries=115, --wrap-descriptions=120]
@@ -58,13 +58,13 @@ repos:
require_serial: false

- repo: https://github.com/psf/black
rev: 22.6.0
rev: 22.8.0
hooks:
- id: black
name: Format code

- repo: https://github.com/executablebooks/mdformat
rev: 0.7.14
rev: 0.7.16
hooks:
- id: mdformat
additional_dependencies:
@@ -83,7 +83,7 @@ repos:
- id: yesqa

- repo: https://github.com/PyCQA/flake8
rev: 5.0.3
rev: 5.0.4
hooks:
- id: flake8
name: PEP8
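
For context only (not part of this diff), here is a hypothetical sketch of the kind of rewrite the pyupgrade hook bumped above applies when run with --py36-plus; the Point class and its method are invented for illustration:

```python
# Hypothetical before/after sketch of a pyupgrade (--py36-plus) rewrite.

# Before the hook runs: old-style class declaration and str.format().
class PointOld(object):
    def describe(self, x, y):
        return "point at ({}, {})".format(x, y)

# After the hook runs: implicit object base class and an f-string.
class Point:
    def describe(self, x, y):
        return f"point at ({x}, {y})"

print(Point().describe(1, 2))  # point at (1, 2)
```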
1 change: 0 additions & 1 deletion src/torchmetrics/audio/snr.py
@@ -58,7 +58,6 @@ class SignalNoiseRatio(Metric):
References:
[1] Le Roux, Jonathan, et al. "SDR half-baked or well done." IEEE International Conference on Acoustics, Speech
and Signal Processing (ICASSP) 2019.
"""
full_state_update: bool = False
is_differentiable: bool = True
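
For reference, a minimal usage sketch of the SignalNoiseRatio metric whose docstring is trimmed above (not part of this commit; the input values are illustrative):

```python
import torch
from torchmetrics.audio import SignalNoiseRatio

target = torch.tensor([3.0, -0.5, 2.0, 7.0])  # clean reference signal
preds = torch.tensor([2.5, 0.0, 2.0, 8.0])    # degraded estimate

snr = SignalNoiseRatio()
print(snr(preds, target))  # scalar tensor with the SNR in dB; higher is better
```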
5 changes: 2 additions & 3 deletions src/torchmetrics/audio/stoi.py
@@ -23,8 +23,8 @@


class ShortTimeObjectiveIntelligibility(Metric):
r"""STOI (Short-Time Objective Intelligibility, see [2,3]), a wrapper for the pystoi package [1].
Note that input will be moved to `cpu` to perform the metric calculation.
r"""STOI (Short-Time Objective Intelligibility, see [2,3]), a wrapper for the pystoi package [1]. Note that
input will be moved to `cpu` to perform the metric calculation.
Intelligibility measure which is highly correlated with the intelligibility of degraded speech signals, e.g., due
to additive noise, single-/multi-channel noise reduction, binary masking and vocoded speech as in CI simulations.
@@ -75,7 +75,6 @@ class ShortTimeObjectiveIntelligibility(Metric):
[4] J. Jensen and C. H. Taal, 'An Algorithm for Predicting the Intelligibility of Speech Masked by Modulated
Noise Maskers', IEEE Transactions on Audio, Speech and Language Processing, 2016.
"""
sum_stoi: Tensor
total: Tensor
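
Likewise, a usage sketch for ShortTimeObjectiveIntelligibility (not from this diff; it assumes the pystoi package is installed, and the computation runs on CPU as the reflowed docstring notes):

```python
import torch
from torchmetrics.audio import ShortTimeObjectiveIntelligibility

preds = torch.randn(8000)   # one second of "degraded" audio at 8 kHz (illustrative)
target = torch.randn(8000)  # corresponding clean reference

stoi = ShortTimeObjectiveIntelligibility(fs=8000, extended=False)
print(stoi(preds, target))  # wraps pystoi; inputs are moved to CPU internally
```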
7 changes: 3 additions & 4 deletions src/torchmetrics/classification/accuracy.py
@@ -311,7 +311,6 @@ class MultilabelAccuracy(MultilabelStatScores):
>>> metric(preds, target)
tensor([[0.5000, 0.5000, 0.0000],
[0.0000, 0.0000, 0.5000]])
"""
is_differentiable = False
higher_is_better = True
@@ -325,9 +324,10 @@ def compute(self) -> Tensor:


class Accuracy(StatScores):
r"""
r"""Accuracy.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
@@ -455,7 +455,6 @@ class Accuracy(StatScores):
>>> accuracy = Accuracy(top_k=2)
>>> accuracy(preds, target)
tensor(0.6667)
"""
is_differentiable = False
higher_is_better = True
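
A usage sketch of the new-style MultilabelAccuracy class touched above (not part of this commit; the tensors mirror the docstring example):

```python
import torch
from torchmetrics.classification import MultilabelAccuracy

target = torch.tensor([[0, 1, 0], [1, 0, 1]])
preds = torch.tensor([[0.11, 0.22, 0.84], [0.73, 0.33, 0.92]])

metric = MultilabelAccuracy(num_labels=3)
print(metric(preds, target))  # probabilities are thresholded at 0.5 by default
```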
3 changes: 1 addition & 2 deletions src/torchmetrics/classification/auc.py
@@ -22,8 +22,7 @@


class AUC(Metric):
r"""
Computes Area Under the Curve (AUC) using the trapezoidal rule
r"""Computes Area Under the Curve (AUC) using the trapezoidal rule.
Forward accepts two input tensors that should be 1D and have the same number
of elements
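
An illustrative sketch for the AUC metric documented above (not from this diff; it assumes the class still accepts 1D x/y tensors via update() as the docstring describes, and the values are made up):

```python
import torch
from torchmetrics import AUC

x = torch.tensor([0.0, 1.0, 2.0, 3.0])  # already sorted, so no reordering is needed
y = torch.tensor([0.0, 1.0, 2.0, 2.0])

auc = AUC()
auc.update(x, y)
print(auc.compute())  # trapezoidal area under the (x, y) curve
```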
22 changes: 9 additions & 13 deletions src/torchmetrics/classification/auroc.py
@@ -40,9 +40,8 @@


class BinaryAUROC(BinaryPrecisionRecallCurve):
r"""
Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for binary tasks. The AUROC score
summarizes the ROC curve into an single number that describes the performance of a model for multiple
r"""Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for binary tasks. The AUROC
score summarizes the ROC curve into an single number that describes the performance of a model for multiple
thresholds at the same time. Notably, an AUROC score of 1 is a perfect score and an AUROC score of 0.5
corresponds to random guessing.
@@ -119,9 +118,8 @@ def compute(self) -> Tensor:


class MulticlassAUROC(MulticlassPrecisionRecallCurve):
r"""
Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for multiclass tasks. The AUROC score
summarizes the ROC curve into an single number that describes the performance of a model for multiple
r"""Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for multiclass tasks. The AUROC
score summarizes the ROC curve into an single number that describes the performance of a model for multiple
thresholds at the same time. Notably, an AUROC score of 1 is a perfect score and an AUROC score of 0.5
corresponds to random guessing.
@@ -188,7 +186,6 @@ class MulticlassAUROC(MulticlassPrecisionRecallCurve):
>>> metric = MulticlassAUROC(num_classes=5, average=None, thresholds=5)
>>> metric(preds, target)
tensor([1.0000, 1.0000, 0.3333, 0.3333, 0.0000])
"""

is_differentiable: bool = False
@@ -221,9 +218,8 @@ def compute(self) -> Tensor:


class MultilabelAUROC(MultilabelPrecisionRecallCurve):
r"""
Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for multilabel tasks. The AUROC score
summarizes the ROC curve into an single number that describes the performance of a model for multiple
r"""Compute Area Under the Receiver Operating Characteristic Curve (`ROC AUC`_) for multilabel tasks. The AUROC
score summarizes the ROC curve into an single number that describes the performance of a model for multiple
thresholds at the same time. Notably, an AUROC score of 1 is a perfect score and an AUROC score of 0.5
corresponds to random guessing.
@@ -324,9 +320,10 @@ def compute(self) -> Tensor:


class AUROC(Metric):
r"""
r"""Area Under the Receiver Operating Characteristic Curve.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
@@ -403,7 +400,6 @@ class AUROC(Metric):
>>> auroc = AUROC(num_classes=3)
>>> auroc(preds, target)
tensor(0.7778)
"""
is_differentiable: bool = False
higher_is_better: bool = True
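
A usage sketch of the BinaryAUROC class reflowed above (not part of this commit; the tensors follow the docstring example):

```python
import torch
from torchmetrics.classification import BinaryAUROC

preds = torch.tensor([0.0, 0.5, 0.7, 0.8])
target = torch.tensor([0, 1, 1, 0])

metric = BinaryAUROC(thresholds=None)  # exact, non-binned computation
print(metric(preds, target))

approx = BinaryAUROC(thresholds=5)     # constant-memory binned approximation
print(approx(preds, target))
```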
15 changes: 6 additions & 9 deletions src/torchmetrics/classification/average_precision.py
@@ -37,8 +37,7 @@


class BinaryAveragePrecision(BinaryPrecisionRecallCurve):
r"""
Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
r"""Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
as an weighted mean of precisions at each threshold, with the difference in recall from the previous threshold
as weight:
@@ -107,8 +106,7 @@ def compute(self) -> Tensor:


class MulticlassAveragePrecision(MulticlassPrecisionRecallCurve):
r"""
Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
r"""Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
as an weighted mean of precisions at each threshold, with the difference in recall from the previous threshold
as weight:
@@ -180,7 +178,6 @@ class MulticlassAveragePrecision(MulticlassPrecisionRecallCurve):
>>> metric = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=5)
>>> metric(preds, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500, -0.0000])
"""

is_differentiable: bool = False
@@ -213,8 +210,7 @@ def compute(self) -> Tensor:


class MultilabelAveragePrecision(MultilabelPrecisionRecallCurve):
r"""
Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
r"""Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve
as an weighted mean of precisions at each threshold, with the difference in recall from the previous threshold
as weight:
@@ -323,9 +319,10 @@ def compute(self) -> Tensor:


class AveragePrecision(Metric):
r"""
r"""Average Precision.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
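
A usage sketch of BinaryAveragePrecision, whose docstring is reflowed above (not part of this commit; the input values are illustrative):

```python
import torch
from torchmetrics.classification import BinaryAveragePrecision

preds = torch.tensor([0.0, 0.5, 0.7, 0.8])
target = torch.tensor([0, 1, 1, 0])

metric = BinaryAveragePrecision(thresholds=None)
print(metric(preds, target))  # weighted mean of precisions along the PR curve
```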
15 changes: 8 additions & 7 deletions src/torchmetrics/classification/calibration_error.py
@@ -36,8 +36,8 @@

class BinaryCalibrationError(Metric):
r"""`Computes the Top-label Calibration Error`_ for binary tasks. The expected calibration error can be used to
quantify how well a given model is calibrated e.g. how well the predicted output probabilities of the model matches
the actual probabilities of the ground truth distribution.
quantify how well a given model is calibrated e.g. how well the predicted output probabilities of the model
matches the actual probabilities of the ground truth distribution.
Three different norms are implemented, each corresponding to variations on the calibration error metric.
@@ -126,9 +126,9 @@ def compute(self) -> Tensor:


class MulticlassCalibrationError(Metric):
r"""`Computes the Top-label Calibration Error`_ for multiclass tasks. The expected calibration error can be used to
quantify how well a given model is calibrated e.g. how well the predicted output probabilities of the model matches
the actual probabilities of the ground truth distribution.
r"""`Computes the Top-label Calibration Error`_ for multiclass tasks. The expected calibration error can be used
to quantify how well a given model is calibrated e.g. how well the predicted output probabilities of the model
matches the actual probabilities of the ground truth distribution.
Three different norms are implemented, each corresponding to variations on the calibration error metric.
Expand Down Expand Up @@ -223,9 +223,10 @@ def compute(self) -> Tensor:


class CalibrationError(Metric):
r"""
r"""Calibration Error.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
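
A usage sketch of BinaryCalibrationError as described in the reflowed docstring above (not from this diff; inputs are illustrative):

```python
import torch
from torchmetrics.classification import BinaryCalibrationError

preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])  # predicted probabilities
target = torch.tensor([0, 0, 1, 1, 1])

metric = BinaryCalibrationError(n_bins=2, norm="l1")  # "l1" is the expected calibration error
print(metric(preds, target))
```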
16 changes: 7 additions & 9 deletions src/torchmetrics/classification/cohen_kappa.py
@@ -30,8 +30,8 @@


class BinaryCohenKappa(BinaryConfusionMatrix):
r"""Calculates `Cohen's kappa score`_ that measures inter-annotator agreement for binary
tasks. It is defined as
r"""Calculates `Cohen's kappa score`_ that measures inter-annotator agreement for binary tasks. It is defined
as.
.. math::
\kappa = (p_o - p_e) / (1 - p_e)
@@ -79,7 +79,6 @@ class labels.
>>> metric = BinaryCohenKappa()
>>> metric(preds, target)
tensor(0.5000)
"""
is_differentiable: bool = False
higher_is_better: bool = True
@@ -104,8 +103,8 @@ def compute(self) -> Tensor:


class MulticlassCohenKappa(MulticlassConfusionMatrix):
r"""Calculates `Cohen's kappa score`_ that measures inter-annotator agreement for multiclass
tasks. It is defined as
r"""Calculates `Cohen's kappa score`_ that measures inter-annotator agreement for multiclass tasks. It is
defined as.
.. math::
\kappa = (p_o - p_e) / (1 - p_e)
@@ -158,7 +157,6 @@ class labels.
>>> metric = MulticlassCohenKappa(num_classes=3)
>>> metric(preds, target)
tensor(0.6364)
"""
is_differentiable: bool = False
higher_is_better: bool = True
@@ -183,9 +181,10 @@ def compute(self) -> Tensor:


class CohenKappa(Metric):
r"""
r"""Cohen Kappa.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
@@ -235,7 +234,6 @@ class labels.
>>> cohenkappa = CohenKappa(num_classes=2)
>>> cohenkappa(preds, target)
tensor(0.5000)
"""
is_differentiable: bool = False
higher_is_better: bool = True
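
A usage sketch of BinaryCohenKappa matching the tensor(0.5000) result quoted in the docstring above (not part of this commit; the preds/target values are assumed from the upstream example and not visible in this excerpt):

```python
import torch
from torchmetrics.classification import BinaryCohenKappa

target = torch.tensor([1, 1, 0, 0])
preds = torch.tensor([0.35, 0.85, 0.48, 0.01])  # probabilities; thresholded at 0.5 internally

metric = BinaryCohenKappa()
print(metric(preds, target))
```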
15 changes: 6 additions & 9 deletions src/torchmetrics/classification/confusion_matrix.py
@@ -41,8 +41,7 @@


class BinaryConfusionMatrix(Metric):
r"""
Computes the `confusion matrix`_ for binary tasks.
r"""Computes the `confusion matrix`_ for binary tasks.
Accepts the following input tensors:
@@ -129,8 +128,7 @@ def compute(self) -> Tensor:


class MulticlassConfusionMatrix(Metric):
r"""
Computes the `confusion matrix`_ for multiclass tasks.
r"""Computes the `confusion matrix`_ for multiclass tasks.
Accepts the following input tensors:
@@ -224,8 +222,7 @@ def compute(self) -> Tensor:


class MultilabelConfusionMatrix(Metric):
r"""
Computes the `confusion matrix`_ for multilabel tasks.
r"""Computes the `confusion matrix`_ for multilabel tasks.
Accepts the following input tensors:
@@ -319,9 +316,10 @@ def compute(self) -> Tensor:


class ConfusionMatrix(Metric):
r"""
r"""Confusion Matrix.
.. note::
From v0.10 an `'binary_*'`, `'multiclass_*', `'multilabel_*'` version now exist of each classification
From v0.10 an ``'binary_*'``, ``'multiclass_*'``, ``'multilabel_*'`` version now exist of each classification
metric. Moving forward we recommend using these versions. This base metric will still work as it did
prior to v0.10 until v0.11. From v0.11 the `task` argument introduced in this metric will be required
and the general order of arguments may change, such that this metric will just function as an single
@@ -389,7 +387,6 @@ class ConfusionMatrix(Metric):
tensor([[[1, 0], [0, 1]],
[[1, 0], [1, 0]],
[[0, 1], [0, 1]]])
"""
is_differentiable: bool = False
higher_is_better: Optional[bool] = None
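
A usage sketch of BinaryConfusionMatrix, whose docstring is reflowed above (not from this diff; the tensors are illustrative):

```python
import torch
from torchmetrics.classification import BinaryConfusionMatrix

target = torch.tensor([1, 1, 0, 0])
preds = torch.tensor([0, 1, 0, 0])

bcm = BinaryConfusionMatrix()
print(bcm(preds, target))  # 2x2 counts laid out as [[tn, fp], [fn, tp]]
```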
1 change: 0 additions & 1 deletion src/torchmetrics/classification/dice.py
@@ -114,7 +114,6 @@ class Dice(StatScores):
>>> dice = Dice(average='micro')
>>> dice(preds, target)
tensor(0.2500)
"""
is_differentiable: bool = False
higher_is_better: bool = True
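
A usage sketch for the Dice metric, with inputs chosen to reproduce the tensor(0.2500) result quoted in the docstring above (the exact input tensors are not visible in this excerpt, so they are assumed from the upstream example):

```python
import torch
from torchmetrics import Dice

preds = torch.tensor([2, 0, 2, 1])
target = torch.tensor([1, 1, 2, 0])

dice = Dice(average="micro")
print(dice(preds, target))  # tensor(0.2500): one correct prediction out of four, micro-averaged
```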
4 changes: 2 additions & 2 deletions src/torchmetrics/classification/exact_match.py
@@ -31,8 +31,8 @@


class MultilabelExactMatch(Metric):
r"""Computes Exact match (also known as subset accuracy) for multilabel tasks. Exact Match is a stricter
version of accuracy where all labels have to match exactly for the sample to be correctly classified.
r"""Computes Exact match (also known as subset accuracy) for multilabel tasks. Exact Match is a stricter version
of accuracy where all labels have to match exactly for the sample to be correctly classified.
Accepts the following input tensors:
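
Finally, a usage sketch of MultilabelExactMatch as described in the reflowed docstring above (not part of this commit; the tensors are illustrative):

```python
import torch
from torchmetrics.classification import MultilabelExactMatch

target = torch.tensor([[0, 1, 0], [1, 0, 1]])
preds = torch.tensor([[0, 0, 1], [1, 0, 1]])

metric = MultilabelExactMatch(num_labels=3)
print(metric(preds, target))  # tensor(0.5000): only the second sample matches on every label
```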