Classification metrics overhaul: precision & recall (4/n) #4842
Conversation
Hello @tadejsv! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-01-17 18:02:19 UTC
LGTM :]
@@ -120,55 +87,6 @@ def test_get_num_classes(pred, target, num_classes, expected_num_classes):
    assert get_num_classes(pred, target, num_classes) == expected_num_classes


@pytest.mark.parametrize(['pred', 'target', 'expected_tp', 'expected_fp',
Why did we remove those tests?
I am removing tests for deprecated functions. This was also done in other PRs before, see #4704
Looks good. Just some comments.
Nice work :)
some minor comments, but otherwise it LGTM ;]
This PR is a spin-off from #4835.

What does this PR do?

Recall, Precision

These are all metrics that can be represented as a (quotient) function of "stat scores" - thanks to subclassing `StatScores`, their code is extremely simple. Here are the parameters common to all of them:

- `average`: this builds on the `reduce` parameter in `StatScores`. The options here (`micro`, `macro`, `weighted`, `none` or `None`, `samples`) are exactly equivalent to their sklearn counterparts, so I won't go into details.
- `mdmc_average`: builds on the `mdmc_reduce` parameter from `StatScores`. This decides how to average scores for multi-dimensional multi-class inputs; already discussed for `mdmc_reduce`.

Both also get the `top_k` parameter, enabling their use as Recall@K and Precision@K - very useful for information retrieval.

Deprecations

I have deprecated the `precision_recall` metric, as well as the old `precision` and `recall` (in case someone was importing them using the full path; otherwise they are replaced by the new `precision` and `recall`).
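The "quotient of stat scores" idea can be illustrated with a minimal, dependency-free sketch. The helper names (`stat_scores`, `precision`, `recall`) and the restriction to `micro`/`macro` averaging are assumptions for illustration, not the PR's actual implementation: per-class TP/FP/FN counts are computed once, and both metrics are then simple ratios over them.

```python
def stat_scores(preds, target, num_classes):
    """Per-class true positives, false positives and false negatives
    for integer class predictions (a simplified stand-in for StatScores)."""
    tp = [0] * num_classes
    fp = [0] * num_classes
    fn = [0] * num_classes
    for p, t in zip(preds, target):
        if p == t:
            tp[p] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    return tp, fp, fn


def _averaged_ratio(num, denom, average, num_classes):
    """micro: one ratio over summed counts; macro: unweighted mean of per-class ratios."""
    if average == "micro":
        total = sum(denom)
        return sum(num) / total if total else 0.0
    if average == "macro":
        per_class = [
            num[c] / denom[c] if denom[c] else 0.0 for c in range(num_classes)
        ]
        return sum(per_class) / num_classes
    raise ValueError(f"unsupported average: {average}")


def precision(preds, target, num_classes, average="micro"):
    tp, fp, _ = stat_scores(preds, target, num_classes)
    return _averaged_ratio(tp, [tp[c] + fp[c] for c in range(num_classes)],
                           average, num_classes)


def recall(preds, target, num_classes, average="micro"):
    tp, _, fn = stat_scores(preds, target, num_classes)
    return _averaged_ratio(tp, [tp[c] + fn[c] for c in range(num_classes)],
                           average, num_classes)
```

For example, with `preds = [0, 1, 1, 2]` and `target = [0, 1, 2, 2]`, micro precision and recall are both 3/4, while macro precision is (1 + 0.5 + 1) / 3 since class 1 has one false positive. The `weighted` and `samples` options (and `top_k`) would build on the same counts.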