Wrong numbers for multiclass acc/prec/f1 if some class is not listed in true/pred tensors #295
Comments
Hi! Thanks for your contribution, great first issue!
Hi @Borda, I can have a look at this if no one else is already involved here.
That would be great, thank you!
Hi @vatch123, this is the relevant line: `weights = torch.where(zero_div_mask | ignore_mask, tensor(0.0, device=weights.device), weights)` (see the sketch after these comments).
Hi. Thanks for this. Let me have a look at it.
Any update on this?
@celsofranssa we have an open PR #303 that should fix it.
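For context, here is a minimal sketch of the idea behind the `weights` line quoted in the comments above: when macro-averaging, classes with zero support get zero weight so they no longer drag the result down. The function name `masked_macro_average` and everything around the quoted line are illustrative assumptions, not the actual torchmetrics source.

```python
import torch
from torch import tensor


def masked_macro_average(per_class_scores: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
    # Start with equal weight for every class.
    weights = torch.ones_like(per_class_scores)
    # Classes with zero support would otherwise contribute a meaningless 0-score term.
    zero_div_mask = support == 0
    # In the proposed fix an ignore_index mask is combined the same way; empty here for simplicity.
    ignore_mask = torch.zeros_like(zero_div_mask)
    weights = torch.where(zero_div_mask | ignore_mask, tensor(0.0, device=weights.device), weights)
    # Average only over classes that actually appear in the data.
    return (per_class_scores * weights).sum() / weights.sum()


# Class 2 is absent, so only classes 0 and 1 count towards the macro score.
scores = torch.tensor([1.0, 1.0, 0.0])
support = torch.tensor([2, 2, 0])
print(masked_macro_average(scores, support))  # tensor(1.)
```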
🐛 Bug
TorchMetrics' output for precision, accuracy, and F1 differs from sklearn's version of the same metrics when `average='macro'` is used and the pred/true tensors contain labels for all classes except one.
To Reproduce
Steps to reproduce the behavior:
Code sample
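The original code sample was not preserved here; the snippet below is a hedged reconstruction from the description (`num_classes=3`, `average='macro'`, no samples of class 2 in either tensor). The exact tensor values and the use of the 0.3.x functional API (`accuracy`, `precision`, `f1`) are assumptions.

```python
import torch
from sklearn.metrics import accuracy_score, f1_score, precision_score
from torchmetrics.functional import accuracy, f1, precision

preds = torch.tensor([0, 1, 0, 1])   # class 2 is never predicted
target = torch.tensor([0, 1, 0, 1])  # class 2 is never present

# torchmetrics 0.3.2 (functional API of that release)
print(accuracy(preds, target, average='macro', num_classes=3))
print(precision(preds, target, average='macro', num_classes=3))
print(f1(preds, target, average='macro', num_classes=3))

# sklearn reference: all three are 1.0 for these inputs
print(accuracy_score(target.numpy(), preds.numpy()))
print(precision_score(target.numpy(), preds.numpy(), average='macro'))
print(f1_score(target.numpy(), preds.numpy(), average='macro'))
```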
Expected behavior
They should all be 1.0, matching the sklearn version.
But adding even a single item with id=2 makes the numbers right, so it is definitely a bug when a label exists but has no examples in the pred and true tensors.
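To illustrate that observation, here is a hedged continuation of the reconstruction above (same assumed tensor values) with a single class-2 sample appended, after which all three metrics are reported to agree with sklearn at 1.0:

```python
import torch
from torchmetrics.functional import accuracy, f1, precision

preds = torch.tensor([0, 1, 0, 1, 2])   # a single class-2 sample added
target = torch.tensor([0, 1, 0, 1, 2])

print(accuracy(preds, target, average='macro', num_classes=3))   # tensor(1.)
print(precision(preds, target, average='macro', num_classes=3))  # tensor(1.)
print(f1(preds, target, average='macro', num_classes=3))         # tensor(1.)
```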
Environment
- GPU:
- NVIDIA Quadro RTX 5000
- available: True
- version: 11.0
- numpy: 1.20.3
- pyTorch_debug: True
- pyTorch_version: 1.7.0+cu110
- pytorch-lightning: 1.3.4
- torchmetrics: 0.3.2
- tqdm: 4.51.0
- OS: Linux
- architecture:
- 64bit
- processor: x86_64
- python: 3.7.10
- version: #1 SMP Sun Feb 14 18:10:38 EST 2021