[Metrics] Unification of regression #4166
Conversation
Hello @SkafteNicki! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-10-21 20:35:36 UTC
I really like what you did here. I added some suggestions on typing, but the code itself is fine.
My only concern is about tests...
Co-authored-by: Justus Schock <[email protected]>
Excellent! This should be the standard for metrics moving forward :)
Can probably change the rest of the regression metrics to have
Also, the plan would be to do this with the classification metrics too, right?
Codecov Report

@@           Coverage Diff            @@
##           master    #4166    +/-  ##
========================================
+ Coverage      90%      93%      +3%
========================================
  Files         103      109       +6
  Lines        7842     7912      +70
========================================
+ Hits         7053     7349     +296
+ Misses        789      563     -226
Currently SSIM is failing tests. When it was originally added, it only passed due to a tolerance of 1e-4, and it is still failing now with the new, more rigorous tests. @ydcjeff do you think you could take a look?
Sure, I will take a look.
@teddykoker agree that we should do the same unification for all the classification metrics. There are just more of them, so I started with the easy ones 😄
From the code so far, I can approve this PR (except for the failing tests, of course :D)
@justusschock @SkafteNicki All of the regression metrics are unified now; the only issue is that the structural similarity tests are failing. Do you think we should just comment them out so they can be fixed in a new PR? It would be great to get this merged ASAP so we can start doing the same with the classification metrics.
The results for SSIM only match up to 4 or 5 decimal places; I am not sure how to get them to match to 6 or 7 decimal places.
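For context, agreement to 4-5 decimal places corresponds roughly to an absolute tolerance of 1e-4. A minimal NumPy sketch (the two values are made up for illustration, not actual SSIM outputs from the PR):

```python
import numpy as np

ours = 0.91231       # hypothetical SSIM from the new implementation
reference = 0.91237  # hypothetical scikit-image reference value

# agreement in the 5th decimal place passes at atol=1e-4 ...
np.testing.assert_allclose(ours, reference, rtol=0, atol=1e-4)

# ... but fails at the much tighter atol=1e-7
try:
    np.testing.assert_allclose(ours, reference, rtol=0, atol=1e-7)
    raise RuntimeError("unexpectedly passed")
except AssertionError:
    pass  # expected: a difference of 6e-5 exceeds 1e-7
```

This is why the test tolerance, not the implementation, determines whether SSIM "passes": the same numerical gap is acceptable at 1e-4 and a failure at 1e-7.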
@ydcjeff I increased the tolerance for
process_group=process_group,
)
rank_zero_warn(
    'Metric `SSIM` will save all targets and'
We can use 120-character lines.
return peak_signal_noise_ratio(sk_target, sk_preds, data_range=data_range)

def _base_e_sk_metric(preds, target, data_range):
What is the `_base_e_sk_metric` for?
That is for testing when the `base` parameter in our own implementation is different from log-base-10 (in this case log-e, i.e. the natural logarithm).
What does this PR do?
Attempt at unification between the new class-based regression metrics and their functional counterparts.
In short, each functional now consists of three functions:
where the first two are imported into the corresponding class-based metric and used in its `update` and `compute` methods, respectively. The last is what the user interacts with if they use the functional interface. A bit of renaming is also happening (again for consistency), plus removal of old redundant tests.
Tagging @ananyahjha93 and @teddykoker for opinion.
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃