
PearsonCorrcoef breaks when a single sample is passed at a time #274

Closed
raimis opened this issue Jun 7, 2021 · 10 comments
Labels: bug / fix, help wanted


raimis commented Jun 7, 2021

🐛 Bug

When a single sample is passed per update to the PearsonCorrcoef metric, compute() crashes with a RuntimeError. Similar to #227 and partially fixed by #229. Originally reported in #227 (comment)
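
Apparently the stored state ends up zero-dimensional, so the torch.cat call in compute fails. A minimal demonstration of that underlying behaviour (my reading of the traceback below, not a claim about the metric's internals):

import torch

# torch.cat rejects zero-dimensional tensors, which is exactly the
# RuntimeError raised inside PearsonCorrcoef.compute() below
torch.cat([torch.tensor(0.3), torch.tensor(0.7)], dim=0)  # RuntimeError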

To Reproduce

import torch
import torchmetrics

print(torch.__version__)
print(torchmetrics.__version__)

correlation = torchmetrics.PearsonCorrcoef()

# Feed one (prediction, target) pair per update call
correlation.update(torch.tensor([0.3]), torch.tensor([0.4]))
correlation.update(torch.tensor([0.7]), torch.tensor([0.5]))
correlation.update(torch.tensor([0.9]), torch.tensor([0.4]))
correlation.compute()  # raises RuntimeError

Output:

1.8.0
0.3.2
/shared/raimis/opt/miniconda/envs/tmp/lib/python3.8/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PearsonCorrcoef` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
Traceback (most recent call last):
  File "bug.py", line 12, in <module>
    correlation.compute()
  File "/shared/raimis/opt/miniconda/envs/tmp/lib/python3.8/site-packages/torchmetrics/metric.py", line 251, in wrapped_func
    self._computed = compute(*args, **kwargs)
  File "/shared/raimis/opt/miniconda/envs/tmp/lib/python3.8/site-packages/torchmetrics/regression/pearson.py", line 95, in compute
    preds = torch.cat(self.preds, dim=0)
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated

Expected behavior

The metric is computed without throwing the RuntimeError.
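
A possible interim workaround, sketched under the assumption that all samples fit in memory (this is plain torch, not the metric's API): accumulate the samples yourself and compute Pearson r in one shot.

import torch

# Hypothetical workaround: collect every sample first, then compute
# r = cov(x, y) / (std(x) * std(y)) directly over the full 1-D tensors
preds = torch.tensor([0.3, 0.7, 0.9])
target = torch.tensor([0.4, 0.5, 0.4])

px = preds - preds.mean()
ty = target - target.mean()
r = (px * ty).sum() / torch.sqrt((px * px).sum() * (ty * ty).sum())
print(r)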

raimis added the bug / fix and help wanted labels on Jun 7, 2021

github-actions bot commented Jun 7, 2021

Hi! Thanks for your contribution, great first issue!

@edgarriba

@raimis please try updating to torchmetrics > 0.3.2; this issue is fixed on master.
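
For example, installing the development version straight from master (assuming the repository URL in use at the time):

$ pip install git+https://github.com/PyTorchLightning/metrics.git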


raimis commented Jun 9, 2021

Thanks @edgarriba. Do you know when the next version will be released?

@edgarriba

I guess at some point soon, since the last release was a month ago. Any insights here, @Borda @edenlightning?


Borda commented Jun 9, 2021

> Do you know when the next version will be released?

do you need anything specific from master?


raimis commented Jun 9, 2021

> do you need anything specific from master?

Just the fix of this issue.


Borda commented Jun 9, 2021


raimis commented Jun 10, 2021

I tried again:

  • Install torchmetrics with conda:
    $ conda create -n tmp_1 -c conda-forge torchmetrics pytorch-gpu
  • The version of torchmetrics is 0.3.2, as expected:
    $ conda activate tmp_1
    $ conda list | grep torchmetrics
    torchmetrics              0.3.2              pyhd8ed1ab_0    conda-forge
  • Run the code above and get the same error:
    $ python
    Python 3.9.4 | packaged by conda-forge | (default, May 10 2021, 22:13:33) 
    [GCC 9.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import torch
    >>> import torchmetrics
    >>> print(torch.__version__)
    1.8.0
    >>> print(torchmetrics.__version__)
    0.3.2
    >>> correlation = torchmetrics.PearsonCorrcoef()
    /shared/raimis/opt/miniconda/envs/tmp_1/lib/python3.9/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PearsonCorrcoef` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
    warnings.warn(*args, **kwargs)
    >>> correlation.update(torch.tensor([0.3]), torch.tensor([0.4]))
    >>> correlation.update(torch.tensor([0.7]), torch.tensor([0.5]))
    >>> correlation.update(torch.tensor([0.9]), torch.tensor([0.4]))
    >>> correlation.compute()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/shared/raimis/opt/miniconda/envs/tmp_1/lib/python3.9/site-packages/torchmetrics/metric.py", line 251, in wrapped_func
        self._computed = compute(*args, **kwargs)
      File "/shared/raimis/opt/miniconda/envs/tmp_1/lib/python3.9/site-packages/torchmetrics/regression/pearson.py", line 95, in compute
        preds = torch.cat(self.preds, dim=0)
    RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
    >>> 

So, the issue is not fixed in 0.3.2.

Borda reopened this Jun 10, 2021
@SkafteNicki

Fix included in v0.4.0, just released (https://github.com/PyTorchLightning/metrics/releases/tag/v0.4.0). Closing.
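
Upgrading should then let the snippet above run cleanly; a quick check, assuming a pip-managed environment:

$ pip install -U "torchmetrics>=0.4.0"
$ python -c "import torchmetrics; print(torchmetrics.__version__)"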


raimis commented Jun 29, 2021

Thanks!
