Add AverageMeter implementation #138
Conversation
Hello @alanhdu! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-04-14 18:29:03 UTC
Codecov Report
@@ Coverage Diff @@
## master #138 +/- ##
==========================================
+ Coverage 95.99% 96.12% +0.13%
==========================================
Files 168 90 -78
Lines 5144 2790 -2354
==========================================
- Hits 4938 2682 -2256
+ Misses 206 108 -98
Flags with carried forward coverage won't be shown.
I am currently struggling a bit with the test suite -- many of the tests seem to have hardcoded an interface where metrics operate over (preds, target) pairs. I was also not sure about the code organization -- I decided to just create a new submodule, since it doesn't seem to fit any of the existing modules super well, but I'm totally open to guidance on where this should live.
Ping us in Slack if you need our help :]
For testing, I think it should be quite easy to check that this works as intended by using our current testing interface and just comparing against np.average (see the sketch after this comment).
Also missing:
- add an entry to the changelog
- add a ref in docs/source/references/modules.rst
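Following up on the np.average suggestion, a minimal sketch of what such a test might look like (a hypothetical test, not the one merged in this PR; it assumes the update(value, weight)/compute() API shown in the diff below):

```python
import numpy as np
import torch

from torchmetrics import AverageMeter


def test_average_meter_matches_np_average():
    values = torch.rand(10, 5)
    weights = torch.rand(10, 5)

    # Feed the values in as a stream of batches.
    meter = AverageMeter()
    for v, w in zip(values, weights):
        meter.update(v, w)

    # np.average flattens the input when axis=None and applies the
    # element-wise weights, which matches a streaming weighted mean.
    expected = np.average(values.numpy(), weights=weights.numpy())
    assert torch.allclose(meter.compute().double(), torch.tensor(expected))
```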
Update: I'm quite busy at work this week with other things, but I will try to take a look at this next week.
Found some time to come back to this. I made a couple of different changes:
Interesting... it looks like
I believe I've responded to the code review comments. The tests for the scalar case are a little messy, but I think they should be workable.
The PR is looking good; if you could look into the failing test, that would be great :]
It seems something is wrong with the type checking:
Cannot resolve forward reference in type annotations of "torchmetrics.AverageMeter.update": name 'Union' is not defined
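For context, a sketch of why an error like that can show up (my illustration, not code from the PR): string annotations are forward references that are only evaluated lazily, e.g. by typing.get_type_hints, and resolution fails if the referenced names are not importable in the module's namespace at that point. The failing check in the PR surfaces a similar message.

```python
import typing

from torch import Tensor


class Meter:
    # The annotation is a plain string (a forward reference); nothing is
    # evaluated at class-definition time, so this "works" even though
    # `Union` was never imported into this module.
    def update(self, value: "Union[Tensor, float]") -> None:
        ...


# Resolution happens lazily and evaluates the string in the module's
# globals, so this raises: NameError: name 'Union' is not defined
typing.get_type_hints(Meter.update)
```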
@alanhdu @SkafteNicki how are we going here? 🐰
torchmetrics/average.py (outdated)
value: "typing.Union[Tensor, float]",
weight: "typing.Union[Tensor, float]" = 1.0
I'm not very happy about this typing; can we find a better way?
It is changed to just Union[Tensor, float] now, is that better? Else we can just go with Any (which is not completely correct, but should not result in the pickle problem).
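Concretely, the revised signature would look something like this (a sketch of the discussed change; with Union imported directly, the annotation is evaluated at definition time and no forward reference needs to be resolved later):

```python
from typing import Union

from torch import Tensor


def update(self, value: Union[Tensor, float], weight: Union[Tensor, float] = 1.0) -> None:
    ...
```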
Head branch was pushed to by a user without write access
This adds an AverageMeter, which is a simple metric that takes the average of a stream of values. Closes #129.
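For readers landing here, a minimal sketch of the idea (my illustration built on the torchmetrics Metric API, not necessarily the exact code merged in this PR): keep a running weighted sum and a running total weight as synced states, and divide in compute.

```python
from typing import Union

import torch
from torch import Tensor
from torchmetrics import Metric


class AverageMeter(Metric):
    """Average of a stream of (optionally weighted) values."""

    def __init__(self) -> None:
        super().__init__()
        # dist_reduce_fx="sum" ensures both states are summed across
        # processes before `compute` runs in distributed settings.
        self.add_state("sum_value", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("sum_weight", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, value: Union[Tensor, float], weight: Union[Tensor, float] = 1.0) -> None:
        value = torch.as_tensor(value, dtype=torch.float32)
        # Broadcast scalar weights so the denominator counts every element.
        weight = torch.as_tensor(weight, dtype=torch.float32) * torch.ones_like(value)
        self.sum_value += (value * weight).sum()
        self.sum_weight += weight.sum()

    def compute(self) -> Tensor:
        return self.sum_value / self.sum_weight
```

With this shape, update(batch, 2.0) counts every element of the batch with weight 2, and compute() returns the running weighted mean over everything seen so far.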