
[Metrics] Unification of regression #4166

Merged

Conversation

SkafteNicki
Member

What does this PR do?

Attempt at unification between the new class-based regression metrics and their functional counterparts.
In short, each functional metric now consists of three functions:

def _metric_name_update(...)
def _metric_name_compute(...)
def metric_name(...):

where the first two are imported into the corresponding class-based metric and used in its respective update and compute methods. The last is what the user interacts with if they use the functional interface.
A bit of renaming is also happening (again, for consistency), along with removal of old redundant tests.
Tagging @ananyahjha93 and @teddykoker for opinion.
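To make the three-function pattern concrete, here is a minimal sketch using mean squared error as an example. The names follow the convention described above, but the state handling is simplified relative to whatever the actual PR implements:

```python
import torch
from torch import Tensor
from typing import Tuple


def _mean_squared_error_update(preds: Tensor, target: Tensor) -> Tuple[Tensor, int]:
    # accumulate the running state: sum of squared errors and observation count
    sum_squared_error = torch.sum(torch.pow(preds - target, 2))
    n_obs = target.numel()
    return sum_squared_error, n_obs


def _mean_squared_error_compute(sum_squared_error: Tensor, n_obs: int) -> Tensor:
    # reduce the accumulated state to the final metric value
    return sum_squared_error / n_obs


def mean_squared_error(preds: Tensor, target: Tensor) -> Tensor:
    # user-facing functional: a single update step followed by compute
    sum_squared_error, n_obs = _mean_squared_error_update(preds, target)
    return _mean_squared_error_compute(sum_squared_error, n_obs)
```

The class-based metric would import `_mean_squared_error_update` and `_mean_squared_error_compute` and call them from its own `update` and `compute` methods, so both interfaces share one implementation.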

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks

pep8speaks commented Oct 15, 2020

Hello @SkafteNicki! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2020-10-21 20:35:36 UTC

@mergify mergify bot requested a review from a team October 15, 2020 08:52
Member

@justusschock justusschock left a comment


I really like what you did here. I added some suggestions on typing, but the code itself is fine.

My only concern is about tests...

Review threads: pytorch_lightning/metrics/functional/explained_variance.py (3, resolved), tests/metrics/functional/test_regression.py (1)
@mergify mergify bot requested a review from a team October 15, 2020 09:38
Contributor

@teddykoker teddykoker left a comment


Excellent! This should be the standard for metrics moving forward :)

@teddykoker
Contributor

We can probably change the rest of the regression metrics to have `_compute` and `_update` functions (even if the update function does nothing for now).

@teddykoker
Contributor

Also the plan would be to do this with the classification metrics too right?

@teddykoker teddykoker changed the title [Metrics] Unification of regression [Metrics] Unification of regression [WIP] Oct 20, 2020
@codecov

codecov bot commented Oct 20, 2020

Codecov Report

Merging #4166 into master will increase coverage by 3%.
The diff coverage is 100%.

@@           Coverage Diff           @@
##           master   #4166    +/-   ##
=======================================
+ Coverage      90%     93%    +3%     
=======================================
  Files         103     109     +6     
  Lines        7842    7912    +70     
=======================================
+ Hits         7053    7349   +296     
+ Misses        789     563   -226     

@Borda Borda self-requested a review October 20, 2020 10:31
@edenlightning edenlightning modified the milestones: 1.0.3, 1.1 Oct 20, 2020
@teddykoker
Contributor

Currently SSIM is failing tests. When it was originally added, it only passed due to a tolerance of 1e-4, and it is still failing now with the new, more rigorous tests. @ydcjeff do you think you could take a look?

@ydcjeff
Contributor

ydcjeff commented Oct 21, 2020

Sure, I will take a look.

@SkafteNicki
Member Author

@teddykoker agree that we should do the same unification for all the classification metrics. There are just more of them, so I started with the easy ones 😄

@mergify mergify bot requested a review from a team October 21, 2020 09:11
@justusschock
Member

From the code so far, I can approve this PR (except for failing test of course :D)

@teddykoker
Contributor

@justusschock @SkafteNicki All of the regression metrics are unified now; the only issue is that the structural similarity tests are failing. Do you think we should just comment them out so they can be fixed in a new PR? It would be great if we could get this merged ASAP so we can start doing the same with the classification metrics.

@ydcjeff
Contributor

ydcjeff commented Oct 21, 2020

The results for SSIM match the reference only to 4 or 5 decimal places; I am not sure how to match to 6 or 7 decimal places.

@teddykoker
Contributor

@ydcjeff I increased the tolerance for the SSIM tests to 1e-3; it should pass all tests now. It would be great if we could lower the tolerance in the future — there seem to be a few differences between the skimage implementation and ours.
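The tolerance-based comparison against a reference implementation can be sketched like this (a hypothetical helper for illustration; the PR's actual test utilities may differ):

```python
import torch


def assert_metric_close(ours: torch.Tensor, reference: torch.Tensor, atol: float = 1e-3) -> None:
    # fail if our metric disagrees with the reference beyond the absolute tolerance;
    # atol=1e-3 mirrors the loosened SSIM tolerance discussed above
    if not torch.allclose(ours, reference, atol=atol):
        raise AssertionError(f"metric mismatch: {ours} vs {reference} (atol={atol})")
```

Loosening `atol` is a stopgap: it lets the tests pass while the implementations disagree in the 4th decimal place, at the cost of masking regressions smaller than the tolerance.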

@teddykoker teddykoker changed the title [Metrics] Unification of regression [WIP] [Metrics] Unification of regression Oct 21, 2020
@ananyahjha93 ananyahjha93 merged commit a937394 into Lightning-AI:master Oct 21, 2020
process_group=process_group,
)
rank_zero_warn(
'Metric `SSIM` will save all targets and'
Member


we can use 120-length lines

@SkafteNicki SkafteNicki deleted the metrics/unification_regression branch October 22, 2020 08:09
return peak_signal_noise_ratio(sk_target, sk_preds, data_range=data_range)


def _base_e_sk_metric(preds, target, data_range):
Member


what is the `_base_e_sk_metric` for?

Member Author


That is for testing the case where the base parameter in our own implementation differs from log-base-10 (in this case base e, i.e. the natural logarithm).
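The base parameter being discussed can be sketched with a standalone PSNR definition (a hypothetical illustration, not the project's actual helper; it assumes PSNR is defined as 10·log_base(data_range²/MSE)):

```python
import math


def psnr(mse: float, data_range: float, base: float = 10.0) -> float:
    # peak signal-to-noise ratio in an arbitrary logarithm base;
    # base=math.e gives the natural-log variant the test helper compares against
    return 10 * math.log(data_range ** 2 / mse, base)
```

By the change-of-base rule, a base-10 PSNR value converts to base e by multiplying by ln(10), which is what a test helper wrapping a base-10 reference implementation would need to do before comparison.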

Labels: feature (Is an improvement or enhancement)
8 participants