
Add AverageMeter implementation #138

Merged (22 commits, merged into Lightning-AI:master on Apr 14, 2021)
Conversation

@alanhdu (Contributor) commented Mar 26, 2021

This adds an AverageMeter, which is a simple metric that takes the average of a stream of values.

Closes #129
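
For reference, a rough usage sketch of the metric described here; the import path, update semantics, and printed value are assumptions drawn from the discussion further down, not the PR's final code:

```python
import torch
from torchmetrics import AverageMeter  # import path assumed from this PR

meter = AverageMeter()
meter.update(0.5)                       # a single scalar observation
meter.update(torch.tensor([1.0, 3.0]))  # each tensor element counts as one observation
print(meter.compute())                  # running mean of everything seen so far, i.e. ~1.5
```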

@pep8speaks commented Mar 26, 2021

Hello @alanhdu! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-04-14 18:29:03 UTC

codecov bot commented Mar 26, 2021

Codecov Report

Merging #138 (4bfc246) into master (ec239e0) will increase coverage by 0.13%.
The diff coverage is 96.90%.

❗ Current head 4bfc246 differs from pull request most recent head c205ffd. Consider uploading reports for the commit c205ffd to get more accurate results
Impacted file tree graph

@@            Coverage Diff             @@
##           master     #138      +/-   ##
==========================================
+ Coverage   95.99%   96.12%   +0.13%     
==========================================
  Files         168       90      -78     
  Lines        5144     2790    -2354     
==========================================
- Hits         4938     2682    -2256     
+ Misses        206      108      -98     
Flag Coverage Δ
Linux 79.78% <91.85%> (+1.05%) ⬆️
Windows 79.78% <91.85%> (+1.05%) ⬆️
cpu 96.12% <96.90%> (+0.13%) ⬆️
gpu ?
macOS 96.12% <96.90%> (+0.13%) ⬆️
pytest 96.12% <96.90%> (+0.13%) ⬆️
python3.6 ?
python3.8 96.12% <96.90%> (+0.13%) ⬆️
python3.9 96.02% <95.57%> (+0.02%) ⬆️
torch1.3.1 ?
torch1.4.0 95.16% <96.46%> (+0.17%) ⬆️
torch1.8.1 96.02% <95.57%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Impacted Files Coverage Δ
torchmetrics/metric.py 94.75% <66.66%> (-0.32%) ⬇️
torchmetrics/utilities/imports.py 82.14% <75.00%> (+2.14%) ⬆️
torchmetrics/functional/regression/spearman.py 93.02% <93.02%> (ø)
torchmetrics/average.py 95.83% <95.83%> (ø)
torchmetrics/functional/regression/pearson.py 96.00% <96.00%> (ø)
torchmetrics/__about__.py 100.00% <100.00%> (ø)
torchmetrics/__init__.py 100.00% <100.00%> (ø)
torchmetrics/classification/__init__.py 100.00% <100.00%> (ø)
torchmetrics/classification/accuracy.py 100.00% <100.00%> (ø)
...hmetrics/classification/binned_precision_recall.py 100.00% <100.00%> (ø)
... and 100 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ec239e0...c205ffd. Read the comment docs.

@alanhdu (Contributor, Author) commented Mar 26, 2021

I am currently struggling a bit with the test suite -- many of the tests seem to hardcode an interface where metrics operate on predictions and targets, which isn't applicable here. I will keep studying the tests to see how to add coverage for this metric.

I was also not sure about the code organization -- I decided to create a new submodule, since this metric doesn't fit neatly into any of the existing ones, but I'm totally open to guidance on where it should live.

@SkafteNicki SkafteNicki added the enhancement New feature or request label Mar 28, 2021
@Borda (Member) commented Mar 28, 2021

I am currently struggling a bit with the test suite

ping us on Slack if you need our help :]

@SkafteNicki (Member) left a comment

For testing, I think it should be quite easy to check that this works as intended by using our current testing interface and simply comparing against np.average (a minimal sketch of such a test follows the list below).

Also missing:

  • add an entry for the new metric to the changelog
  • add a reference in docs/source/references/modules.rst
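
A minimal sketch of the np.average comparison being suggested, assuming the update(value, weight) signature discussed later in the thread; the PR's actual tests go through the shared testing interface rather than this standalone form:

```python
# Illustrative only; names and tolerances here are assumptions.
import numpy as np
import torch
from torchmetrics import AverageMeter

def test_average_meter_matches_numpy():
    values = torch.rand(10)
    weights = torch.rand(10)

    meter = AverageMeter()
    meter.update(values, weights)

    expected = np.average(values.numpy(), weights=weights.numpy())
    assert torch.allclose(meter.compute(), torch.tensor(expected, dtype=torch.float32))
```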

(Several inline review threads on torchmetrics/average.py; all resolved, some marked outdated.)
@alanhdu (Contributor, Author) commented Mar 31, 2021

Update: I'm quite busy at work this week with other things, but I will try to take a look at this next week.

@alanhdu (Contributor, Author) commented Apr 5, 2021

Found some time to come back to this. I made a couple of different changes:

  • I believe I have responded to all of the style requests.
  • I changed the interface slightly so that AverageMeter can take Tensor values/weights as well as scalar ones. If it receives a tensor value, each element of the tensor is treated as a separate observation, and the weights are automatically broadcast to match the values (see the sketch after this list).
  • I have added tests, although I had to abuse some of the names slightly.
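
A rough sketch of the accumulation behaviour described above, written as a plain class rather than the PR's actual Metric subclass; names and details are illustrative:

```python
from typing import Union

import torch
from torch import Tensor

class _AverageSketch:
    """Illustrative stand-in for AverageMeter, not the real implementation."""

    def __init__(self) -> None:
        self.value_sum = torch.tensor(0.0)
        self.weight_sum = torch.tensor(0.0)

    def update(self, value: Union[Tensor, float], weight: Union[Tensor, float] = 1.0) -> None:
        value = torch.as_tensor(value, dtype=torch.float32)
        weight = torch.as_tensor(weight, dtype=torch.float32)
        # Broadcast the weight to the shape of the values: a scalar weight applies
        # to every element, and per-element weights of a compatible shape also work.
        weight = torch.broadcast_tensors(weight, value)[0]
        self.value_sum += (value * weight).sum()
        self.weight_sum += weight.sum()

    def compute(self) -> Tensor:
        return self.value_sum / self.weight_sum
```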

(Resolved review threads on tests/test_average.py and torchmetrics/average.py.)
@alanhdu (Contributor, Author) commented Apr 5, 2021

Interesting... it looks like torch.broadcast_to was only added in PyTorch 1.8. Should I add a version check here, or just revert that behavior (and require weights to be either a scalar or the same shape as the values)?
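
One possible shape of the version check being asked about, sketched here as an assumption (the helper name is hypothetical, and the PR may have resolved this differently):

```python
import torch
from torch import Tensor

# torch.broadcast_to only exists in PyTorch >= 1.8
_HAS_BROADCAST_TO = hasattr(torch, "broadcast_to")

def _broadcast_weight(weight: Tensor, value: Tensor) -> Tensor:
    if _HAS_BROADCAST_TO:
        return torch.broadcast_to(weight, value.shape)
    # Tensor.expand_as has been available far longer and follows the same broadcasting rules here.
    return weight.expand_as(value)
```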

(Resolved review threads on torchmetrics/average.py and tests/test_average.py.)
@alanhdu (Contributor, Author) commented Apr 12, 2021

I believe I've responded to the code review comments. The tests for the scalar case are a little messy, but I think they should be workable.

@alanhdu alanhdu marked this pull request as ready for review April 12, 2021 20:12
@SkafteNicki (Member) left a comment

The PR is looking good; if you could look into the failing test, that would be great :]

It seems something is wrong with the type checking:
Cannot resolve forward reference in type annotations of "torchmetrics.AverageMeter.update": name 'Union' is not defined

(Resolved review thread on CHANGELOG.md.)
@Borda (Member) commented Apr 13, 2021

@alanhdu @SkafteNicki how is it going here? 🐰

@Borda Borda enabled auto-merge (squash) April 14, 2021 12:50
Comment on lines 83 to 84
value: "typing.Union[Tensor, float]",
weight: "typing.Union[Tensor, float]" = 1.0
@Borda (Member) commented Apr 14, 2021
I'm not very happy about this typing; can we find a better way?

A member replied:

It is changed to just Union[Tensor, float] now; is that better? Otherwise we can just go with Any (which is not completely correct, but should not cause the pickle problem).
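
For reference, a sketch of what the revised annotation under discussion would look like; the class body is elided, and in the PR the class derives from the torchmetrics Metric base class:

```python
from typing import Union

from torch import Tensor

class AverageMeter:  # stand-in; the real class subclasses torchmetrics.Metric
    def update(self, value: Union[Tensor, float], weight: Union[Tensor, float] = 1.0) -> None:
        ...  # accumulate the weighted sums as discussed earlier in the thread
```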

auto-merge was automatically disabled April 14, 2021 16:14

Head branch was pushed to by a user without write access

SkafteNicki and others added 9 commits April 14, 2021 20:28
* init files

* rest

* pep8

* changelog

* clamp

* suggestions

* rename

* format

* _sk_pearsonr

* inline

* fix sync

* fix tests

* fix docs

* Apply suggestions from code review

* Update torchmetrics/functional/regression/pearson.py

* atol

* update

* pep8

* pep8

* chlog

* .

Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
* ranking

* init files

* update

* nearly working

* fix tests

* pep8

* add docs

* fix doctests

* fix docs

* pep8

* isort

* ghlog

* Apply suggestions from code review

Co-authored-by: Jirka Borovec <[email protected]>
* added test changes

* fix style error

* fixed typo

* added changes for requires_grad

* metrics differentiability testing generalization

* Update tests/classification/test_accuracy.py

Co-authored-by: Nicki Skafte <[email protected]>

* fix tests

* pep8

* changelog

* fix docs

* fix tests

* pep8

* Apply suggestions from code review

Co-authored-by: Nicki Skafte <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
* WIP: Binned PR-related metrics

* attempt to fix types

* switch to linspace to make old pytorch happy

* make flake happy

* clean up

* Add more testing, move test input generation to the approproate place

* bugfixes and more stable and thorough tests

* flake8

* Reuse python zip-based implementation as it can't be reproduced with torch.where/max

* address comments

* isort

* Add docs and doctests, make APIs same as non-binned versions

* pep8

* isort

* doctests likes longer title underlines :O

* use numpy's nan_to_num

* add atol to bleu tests to make them more stable

* atol=1e-2 for bleu

* add more docs

* pep8

* remove nlp test hack

* address comments

* pep8

* abc

* flake8

* remove typecheck

* chlog

Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: Nicki Skafte <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
* version + about

* flake8

* try

* .
@Borda Borda enabled auto-merge (squash) April 14, 2021 18:29
@Borda Borda merged commit da00174 into Lightning-AI:master Apr 14, 2021
Labels: enhancement (New feature or request), ready
Projects: None yet
Development: Successfully merging this pull request may close these issues: Include AverageMeter?
7 participants