Unqualified threshold checks contradict logits support #369
MichaelSpece added the labels bug / fix ("Something isn't working") and help wanted ("Extra attention is needed") on Jul 13, 2021.
Hi! Thanks for your contribution, great first issue!

Fixed in PR #351, please update to master. Closing issue.

Hi @MichaelSpece, thanks for making me aware that the code indeed is not updated everywhere.
🐛 Bug
Accuracy, in both its functional and module forms, is currently designed to accommodate logits. However, the most natural threshold choice for logits (0) fails in both cases. More generally, any threshold outside of (0, 1) triggers a `ValueError`. This is documented behavior that contradicts logits support.

To Reproduce
Code sample

```python
import torch
import torchmetrics

torchmetrics.functional.classification.accuracy(
    torch.tensor([0]), torch.tensor([1]), threshold=0
)
```

or

```python
import torchmetrics

torchmetrics.classification.accuracy.Accuracy(threshold=0)
```
Expected behavior
Ability to choose arbitrary float thresholds without triggering an exception.
Environment
Bugs are apparent from the source code itself. The threshold checks are unqualified.
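To illustrate what "unqualified" means here (a hypothetical sketch, not the actual torchmetrics source; both function names are made up): an unqualified check rejects every threshold outside (0, 1) unconditionally, whereas a qualified check would restrict the threshold only when the predictions are known to be probabilities:

```python
def check_threshold_unqualified(threshold: float) -> None:
    # Unqualified: rejects logit-style thresholds such as 0 unconditionally.
    if not 0 < threshold < 1:
        raise ValueError(f"Threshold should be in (0, 1), got {threshold}")


def check_threshold_qualified(threshold: float, preds_are_probabilities: bool) -> None:
    # Qualified: only restricts the threshold when preds are probabilities;
    # logits may legitimately be thresholded at any float (e.g. 0).
    if preds_are_probabilities and not 0 < threshold < 1:
        raise ValueError(
            f"Threshold should be in (0, 1) for probabilities, got {threshold}"
        )
```

With the qualified variant, `threshold=0` would pass whenever preds are logits rather than probabilities.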
Additional context
Predictions are formed from a given `preds` via `preds >= threshold`. Threshold checks were recently removed for F1 (#350, #351) to support logits.
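As background on why 0 is the natural logit threshold (a self-contained sketch, independent of torchmetrics): the sigmoid is monotonic and maps 0 to 0.5, so thresholding logits at 0 is equivalent to thresholding probabilities at 0.5:

```python
import math


def sigmoid(x: float) -> float:
    # Logistic function mapping a logit to a probability.
    return 1.0 / (1.0 + math.exp(-x))


# sigmoid(0) == 0.5, and sigmoid is strictly increasing, so the two
# decision rules agree for every input.
for logit in [-3.2, -0.1, 0.0, 0.7, 5.0]:
    assert (logit >= 0.0) == (sigmoid(logit) >= 0.5)
```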
Open to assignment.