
BinnedPrecisionRecall not working for multi-dimensional arrays #663

Closed
omerferhatt opened this issue Dec 7, 2021 · 5 comments · Fixed by #1195
Labels: bug / fix (Something isn't working), help wanted (Extra attention is needed)

Comments

@omerferhatt

🐛 Bug

Binned Precision Recall Curve does not work as expected with multi-class, multi-dimensional input/target.

To Reproduce

Steps to reproduce the behavior:

  1. Go to 'torchmetrics/classification/binned_precision_recall.py'
  2. See lines 172 to 174

Here's the stack trace:

Exception has occurred: RuntimeError       (note: full exception trace is shown but execution is paused at: _run_module_as_main)

The size of tensor a (5) must match the size of tensor b (224) at non-singleton dimension 2
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/site-packages/torchmetrics/classification/binned_precision_recall.py", line 172, in update
    self.TPs[:, i] += (target & predictions).sum(dim=0)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/site-packages/torchmetrics/metric.py", line 255, in wrapped_func
    return update(*args, **kwargs)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/site-packages/torchmetrics/metric.py", line 197, in forward
    self.update(*args, **kwargs)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/omer/smart-alpha/alpha-trainer/test_metrics.py", line 12, in <module>
    pr, rc, th = prcurve(pred, target)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/anaconda3/envs/alpha-trainer/lib/python3.8/runpy.py", line 194, in _run_module_as_main (Current frame)
    return _run_code(code, main_globals, None,

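The failing line is the in-place accumulation in update(). The mismatch can be reproduced in isolation with the sketch below; the shape of the running state TPs is an assumption inferred from the indexing in the trace, not the actual torchmetrics code. For (N, C, H, W) inputs, (target & predictions).sum(dim=0) reduces only the batch dimension, so the result keeps the spatial dimensions and cannot be added to a per-class vector.

import torch

# Assumed state shape: a per-class, per-threshold accumulator, as the
# indexing TPs[:, i] in the trace suggests (illustrative only).
num_classes, num_thresholds = 5, 3
TPs = torch.zeros(num_classes, num_thresholds)

# With (N, C, H, W) inputs, summing over the batch dimension alone keeps
# the spatial dimensions around.
target = torch.randint(0, 2, (8, num_classes, 224, 224)).bool()
predictions = torch.randint(0, 2, (8, num_classes, 224, 224)).bool()
counts = (target & predictions).sum(dim=0)  # shape (5, 224, 224)

# Adding this to the (5,) slice raises the same RuntimeError as above.
try:
    TPs[:, 0] += counts
except RuntimeError as err:
    print(err)
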
Code sample

import torch
import torchmetrics as tm


target = torch.randint(0, 5, (8, 224, 224))
pred = torch.randn(8, 5, 224, 224)
pred = torch.nn.functional.softmax(pred, dim=1)


prcurve = tm.BinnedPrecisionRecallCurve(num_classes=5, thresholds=3)

pr, rc, th = prcurve(pred, target)

Expected behavior

It should work for multi-dimensional data.

Environment

  • PyTorch Version (e.g., 1.0): 1.10.0+cu111
  • OS (e.g., Linux): Ubuntu 20.04
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source):
  • Python version: 3.8.12
  • CUDA/cuDNN version: CUDA 11.1
  • GPU models and configuration: RTX 2080 Super
  • Any other relevant information:
@omerferhatt omerferhatt added bug / fix Something isn't working help wanted Extra attention is needed labels Dec 7, 2021
@github-actions

github-actions bot commented Dec 7, 2021

Hi! Thanks for your contribution, great first issue!

@omerferhatt
Author

I think, since it works with single-value comparison, we could just flatten the n-dimensional array (N, C, ...) to (N, C, F), where F is the flattened axis (see the sketch below).

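For illustration, the flattening described above could look like the sketch below (shapes and variable names are illustrative only, not a proposed patch to torchmetrics):

import torch

# Collapse every dimension after the class axis, so (N, C, H, W, ...)
# becomes (N, C, F) with F = H * W * ...
pred = torch.randn(8, 5, 224, 224).softmax(dim=1)
target = torch.randint(0, 5, (8, 224, 224))

pred_flat = pred.flatten(start_dim=2)      # (8, 5, 50176)
target_flat = target.flatten(start_dim=1)  # (8, 50176)
print(pred_flat.shape, target_flat.shape)
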
@Borda
Member

Borda commented Dec 8, 2021

@omerferhatt mind sending a PR so we can check your suggestion? :]

@Borda
Member

Borda commented Jan 19, 2022

@omerferhatt how is it going? would be nice to have it in the next bugfix release... 🐰

@Borda Borda added this to the v0.8 milestone Feb 11, 2022
@Borda Borda modified the milestones: v0.8, v0.9 Mar 22, 2022
@SkafteNicki SkafteNicki modified the milestones: v0.9, v0.10 May 12, 2022
@SkafteNicki
Member

The issue will be fixed by the classification refactor: see issue #1001 and PR #1195 for all changes.

Small recap: this issue reports that multi-dimensional tensors are not supported in BinnedPrecisionRecall, which is correct. In the refactor, BinnedPrecisionRecall will be completely deprecated; instead, *PrecisionRecallCurve will support both binned and non-binned calculations, where * is either Binary, Multiclass, or Multilabel. These metrics support multi-dimensional tensors out of the box, which solves this issue.

import torch
# provided example is for multiclass problems
from torchmetrics.classification import MulticlassPrecisionRecallCurve

target = torch.randint(0, 5, (8, 224, 224))
pred = torch.randn(8, 5, 224, 224).softmax(dim=1)

# passing the thresholds argument selects the binned approach to calculating the metric
prcurve = MulticlassPrecisionRecallCurve(num_classes=5, thresholds=3)
prcurve(pred, target)
# (tensor([[0.1996, 0.1996, 0.0000, 1.0000],
#         [0.2009, 0.1984, 0.0000, 1.0000],
#         [0.2005, 0.2021, 0.0000, 1.0000],
#         [0.2002, 0.2000, 0.0000, 1.0000],
#         [0.1988, 0.1961, 0.0000, 1.0000]]),
#  tensor([[1.0000, 0.0663, 0.0000, 0.0000],
#         [1.0000, 0.0652, 0.0000, 0.0000],
#         [1.0000, 0.0667, 0.0000, 0.0000],
#         [1.0000, 0.0655, 0.0000, 0.0000],
#         [1.0000, 0.0659, 0.0000, 0.0000]]),
#   tensor([0.0000, 0.5000, 1.0000]))

Issue will be closed when #1195 is merged.
