🚀 Feature
Presently, when using the Accuracy metric on multi-class data with scores (the (N, C) entry in the input types), the scores are required to be probabilities in [0, 1]. However, un-thresholded accuracy can be computed from unnormalized scores as well, since only the relative ordering of the scores matters. Given that some uses of Accuracy do require normalized probabilities, this could be implemented as a flag that disables the input check.
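To illustrate why only the ordering matters, here is a small self-contained check in plain torch (not the metric itself) showing that a softmax never changes which class is predicted:

```python
import torch

torch.manual_seed(0)

# Unnormalized class scores (logits) for a batch of 4 samples and 3 classes.
logits = torch.randn(4, 3) * 5.0

# Softmax is strictly increasing per sample, so it preserves the relative
# ordering of the scores: the argmax (and hence the accuracy) is unchanged.
probs = torch.softmax(logits, dim=1)

assert torch.equal(logits.argmax(dim=1), probs.argmax(dim=1))
```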
Motivation
It is common to work with unnormalized class scores (logits) during training, especially in classification tasks, since that is what the numerically more stable nn.CrossEntropyLoss consumes. Rather than having to compute an additional softmax just for the accuracy metric, it would be reasonable to allow arbitrarily scaled input data.
I specify Accuracy because it is the use case that I ran into, but it's possible other Metrics have the same property.
Pitch
Add a flag to Accuracy (and any other applicable metrics) that disables the input range check for preds.
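To make the pitch concrete, usage could look something like the sketch below. The check_inputs flag name is purely hypothetical (it is not part of the existing API), so this only illustrates the proposal and will not run against current releases:

```python
import torch
from pytorch_lightning.metrics import Accuracy

# Hypothetical flag: skip the [0, 1] range check so raw logits are accepted.
accuracy = Accuracy(check_inputs=False)

logits = torch.randn(8, 10)           # unnormalized scores straight from the model
target = torch.randint(0, 10, (8,))

acc = accuracy(logits, target)        # no extra softmax needed
```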
Alternatives
The present workaround is to apply a softmax before feeding data to your Accuracy metric.
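For concreteness, a minimal sketch of that workaround, assuming the class-based Accuracy metric from pytorch_lightning.metrics that this issue links to:

```python
import torch
from pytorch_lightning.metrics import Accuracy

accuracy = Accuracy()

logits = torch.randn(8, 10)           # raw, unnormalized scores
target = torch.randint(0, 10, (8,))

# Current workaround: normalize to probabilities first so the input check passes.
acc = accuracy(torch.softmax(logits, dim=1), target)
```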
Additional context
https://github.com/PyTorchLightning/pytorch-lightning/blob/0456b4598f5f7eaebf626bca45d563562a15887b/pytorch_lightning/metrics/functional/accuracy.py#L25

Just my opinion, but I feel that if this is implemented it should come with a warning the first time it happens, since sometimes you (I definitely do) would like the metric to be calculated with the current behavior.
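A minimal sketch of the kind of one-time warning suggested above; the helper name and message are hypothetical, and this is not how the library currently behaves:

```python
import warnings

import torch


def _maybe_warn_unnormalized(preds: torch.Tensor) -> None:
    # Hypothetical helper: instead of raising, warn when preds fall outside
    # [0, 1]. Python's default warning filter only reports this call site once.
    if preds.min() < 0 or preds.max() > 1:
        warnings.warn(
            "Scores outside [0, 1] received; treating them as unnormalized logits. "
            "Pass probabilities explicitly if you rely on the current behaviour.",
            UserWarning,
        )
```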