
Commit

prepare 0.6 RC
Borda committed Oct 25, 2021
1 parent 3664c7b commit c9c50f7
Showing 5 changed files with 17 additions and 53 deletions.
64 changes: 14 additions & 50 deletions CHANGELOG.md
@@ -6,67 +6,38 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

**Note: we move fast, but we still preserve backward compatibility for one feature release (0.1 version) back.**

## [unReleased] - 2021-MM-DD
## [0.6.0] - 2021-10-DD

### Added

- Added Learned Perceptual Image Patch Similarity (LPIPS) ([#431](https://github.com/PyTorchLightning/metrics/issues/431))


- Added Tweedie Deviance Score ([#499](https://github.com/PyTorchLightning/metrics/pull/499))


- Added audio metrics:
- Perceptual Evaluation of Speech Quality (PESQ) ([#353](https://github.com/PyTorchLightning/metrics/issues/353))
- Short Term Objective Intelligibility (STOI) ([#353](https://github.com/PyTorchLightning/metrics/issues/353))
- Added Information retrieval metrics:
- `RetrievalRPrecision` ([#577](https://github.com/PyTorchLightning/metrics/pull/577/))
- `RetrievalHitRate` ([#576](https://github.com/PyTorchLightning/metrics/pull/576))
- Added NLP metrics:
- `SacreBLEUScore` ([#546](https://github.com/PyTorchLightning/metrics/pull/546))
- `CharErrorRate` ([#575](https://github.com/PyTorchLightning/metrics/pull/575))
- Added other metrics:
- Tweedie Deviance Score ([#499](https://github.com/PyTorchLightning/metrics/pull/499))
- Learned Perceptual Image Patch Similarity (LPIPS) ([#431](https://github.com/PyTorchLightning/metrics/pull/431))
- Added support for float targets in `nDCG` metric ([#437](https://github.com/PyTorchLightning/metrics/pull/437))


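A minimal sketch of the new Tweedie Deviance Score grouped above; the class name and `power` argument come from the linked PR, while the top-level import path and the input shapes are assumptions:

```python
import torch
from torchmetrics import TweedieDevianceScore

# power=2.0 corresponds to the Gamma deviance, so preds and targets must be positive
deviance = TweedieDevianceScore(power=2.0)
preds = torch.tensor([1.5, 2.0, 4.0])
targets = torch.tensor([1.0, 2.5, 5.0])
print(deviance(preds, targets))
```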
- Added `average` argument to `AveragePrecision` metric for reducing multilabel and multiclass problems ([#477](https://github.com/PyTorchLightning/metrics/pull/477))


- Added Perceptual Evaluation of Speech Quality (PESQ) ([#353](https://github.com/PyTorchLightning/metrics/issues/353))


- Added `average` argument to `AveragePrecision` metric for reducing multi-label and multi-class problems ([#477](https://github.com/PyTorchLightning/metrics/pull/477))
- Added `MultioutputWrapper` ([#510](https://github.com/PyTorchLightning/metrics/pull/510))


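A hedged sketch of how the new wrapper might be used; the `num_outputs` argument and the top-level import are assumptions based on the linked PR, and `R2Score` is only an example base metric:

```python
import torch
from torchmetrics import MultioutputWrapper, R2Score

# compute R2 independently for each of the two output columns
r2 = MultioutputWrapper(R2Score(), num_outputs=2)
preds = torch.randn(16, 2)
target = torch.randn(16, 2)
print(r2(preds, target))  # one score per output
```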
- Added `higher_is_better` as a constant attribute to support metric sweeping ([#544](https://github.com/PyTorchLightning/metrics/pull/544))


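The attribute is meant to let sweeping code decide whether larger metric values are better; the values below are expectations rather than guarantees, and some metrics may still report `None` in this release:

```python
import torchmetrics

# constant class attribute, no instantiation required
print(torchmetrics.Accuracy.higher_is_better)          # expected: True
print(torchmetrics.MeanSquaredError.higher_is_better)  # expected: False
```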
- Added `SacreBLEUScore` metric to text package ([#546](https://github.com/PyTorchLightning/metrics/pull/546))


- Added simple aggregation metrics: `SumMetric`, `MeanMetric`, `CatMetric`, `MinMetric`, `MaxMetric` ([#506](https://github.com/PyTorchLightning/metrics/pull/506))


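A small usage sketch of the aggregation metrics, which follow the usual `update`/`compute` pattern; the accepted input types are an assumption, so tensors are used to stay safe:

```python
import torch
from torchmetrics import MaxMetric, MeanMetric

running_mean = MeanMetric()
running_max = MaxMetric()
for value in (0.3, 0.5, 0.4):
    running_mean.update(torch.tensor(value))
    running_max.update(torch.tensor(value))
print(running_mean.compute(), running_max.compute())  # 0.4 and 0.5
```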
- Added pairwise submodule with metrics ([#553](https://github.com/PyTorchLightning/metrics/pull/553))
- `pairwise_cosine_similarity`
- `pairwise_euclidean_distance`
- `pairwise_linear_similarity`
- `pairwise_manhatten_distance`


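A quick sketch of the pairwise functional interface; each function is assumed to take two matrices of row vectors and return one value per (row of `x`, row of `y`) pair:

```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity, pairwise_euclidean_distance

x = torch.randn(3, 5)
y = torch.randn(4, 5)
print(pairwise_cosine_similarity(x, y).shape)   # torch.Size([3, 4])
print(pairwise_euclidean_distance(x, y).shape)  # torch.Size([3, 4])
```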
- Added Short Term Objective Intelligibility (`STOI`) ([#353](https://github.com/PyTorchLightning/metrics/issues/353))


- Added `RetrievalRPrecision` metric to retrieval package ([#577](https://github.com/PyTorchLightning/metrics/pull/577/))


- Added `RetrievalHitRate` metric to retrieval package ([#576](https://github.com/PyTorchLightning/metrics/pull/576))


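A hedged sketch of `RetrievalHitRate` following the pattern of the existing retrieval metrics, where `indexes` groups predictions by query; the `k` argument and the exact call signature are assumptions:

```python
import torch
from torchmetrics import RetrievalHitRate

indexes = torch.tensor([0, 0, 0, 1, 1, 1])                        # query id per document
preds = torch.tensor([0.9, 0.2, 0.4, 0.1, 0.6, 0.3])              # predicted relevance
target = torch.tensor([True, False, False, True, False, False])   # true relevance
hit_rate = RetrievalHitRate(k=2)
print(hit_rate(preds, target, indexes=indexes))  # query 0 hits in the top 2, query 1 does not -> 0.5
```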
- Added `CharErrorRate` metric to text package ([#575](https://github.com/PyTorchLightning/metrics/pull/575))


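A minimal sketch of the new character error rate; the top-level import and the `(preds, target)` argument order are assumptions, so the class may need to be imported from the text subpackage instead:

```python
from torchmetrics import CharErrorRate

cer = CharErrorRate()
# edit distance (1 extra character) divided by the reference length (10 characters)
print(cer(["hello world"], ["hello word"]))  # roughly 0.1
```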
### Changed

- `AveragePrecision` now outputs the `macro` average by default for multilabel and multiclass problems ([#477](https://github.com/PyTorchLightning/metrics/pull/477))


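A sketch of the new default; passing `average=None` to recover per-class scores is an assumption mirrored from the other classification metrics:

```python
import torch
from torchmetrics import AveragePrecision

preds = torch.softmax(torch.randn(20, 3), dim=-1)
target = torch.randint(0, 3, (20,))

ap_macro = AveragePrecision(num_classes=3)                    # macro average by default
ap_per_class = AveragePrecision(num_classes=3, average=None)  # per-class scores
print(ap_macro(preds, target))
print(ap_per_class(preds, target))
```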
- `half`, `double`, `float` will no longer change the dtype of the metric states. Use `metric.set_dtype` instead ([#493](https://github.com/PyTorchLightning/metrics/pull/493))


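A minimal sketch of the replacement call, using `MeanSquaredError` purely as an example metric:

```python
import torch
from torchmetrics import MeanSquaredError

metric = MeanSquaredError()
# metric.double() / metric.half() no longer touch the internal states;
# the explicit setter is the supported way to change their dtype
metric.set_dtype(torch.float64)
```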
- Renamed `AverageMeter` to `MeanMetric` ([#506](https://github.com/PyTorchLightning/metrics/pull/506))


- Changed `is_differentiable` from a property to a constant attribute ([#551](https://github.com/PyTorchLightning/metrics/pull/551))

### Deprecated
@@ -77,18 +48,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Removed `dtype` property ([#493](https://github.com/PyTorchLightning/metrics/pull/493))


### Fixed

- Fixed bug in `F1` with `average='macro'` and `ignore_index!=None` ([#495](https://github.com/PyTorchLightning/metrics/pull/495))


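A sketch of the configuration the fix targets, with illustrative tensors; the argument names follow the 0.x classification API and are otherwise assumptions:

```python
import torch
from torchmetrics import F1

# macro averaging with one class excluded -- the combination affected by the bug
f1 = F1(num_classes=3, average="macro", ignore_index=0)
preds = torch.tensor([0, 1, 2, 2, 1])
target = torch.tensor([0, 1, 2, 1, 1])
print(f1(preds, target))
```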
- Fixed bug in `pit` by using the first returned result to initialize device and type ([#533](https://github.com/PyTorchLightning/metrics/pull/533))


- Fixed `SSIM` metric using too much memory ([#539](https://github.com/PyTorchLightning/metrics/pull/539))


- Fixed bug where the `device` property was not properly updated when the metric was a child of a module ([#542](https://github.com/PyTorchLightning/metrics/pull/542))

## [0.5.1] - 2021-08-30
2 changes: 1 addition & 1 deletion tests/classification/test_average_precision.py
@@ -24,7 +24,7 @@
from tests.classification.inputs import _input_multilabel
from tests.helpers import seed_all
from tests.helpers.testers import NUM_CLASSES, MetricTester
from torchmetrics.classification.average_precision import AveragePrecision
from torchmetrics.classification.avg_precision import AveragePrecision
from torchmetrics.functional import average_precision

seed_all(42)
2 changes: 1 addition & 1 deletion torchmetrics/__about__.py
@@ -1,4 +1,4 @@
__version__ = "0.6.0dev"
__version__ = "0.6.0rc0"
__author__ = "PyTorchLightning et al."
__author_email__ = "[email protected]"
__license__ = "Apache-2.0"
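An illustrative check that a release-candidate install is in use; nothing beyond the `__version__` string shown in the diff is assumed:

```python
import torchmetrics

print(torchmetrics.__version__)  # "0.6.0rc0" for this release candidate
```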
2 changes: 1 addition & 1 deletion torchmetrics/classification/__init__.py
@@ -14,7 +14,7 @@
from torchmetrics.classification.accuracy import Accuracy # noqa: F401
from torchmetrics.classification.auc import AUC # noqa: F401
from torchmetrics.classification.auroc import AUROC # noqa: F401
from torchmetrics.classification.average_precision import AveragePrecision # noqa: F401
from torchmetrics.classification.avg_precision import AveragePrecision # noqa: F401
from torchmetrics.classification.binned_precision_recall import BinnedAveragePrecision # noqa: F401
from torchmetrics.classification.binned_precision_recall import BinnedPrecisionRecallCurve # noqa: F401
from torchmetrics.classification.binned_precision_recall import BinnedRecallAtFixedPrecision # noqa: F401
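Because the class is still re-exported from `torchmetrics.classification` (as shown in the diff above), only code importing from the old internal `average_precision` module should need updating; a quick check:

```python
from torchmetrics.classification import AveragePrecision

metric = AveragePrecision(num_classes=3)  # public import path is unchanged by the module rename
```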
