🚀 Feature
I'd like to propose updating the class metrics interface of Precision/Recall/Fbeta so that the average argument also accepts none and weighted, as in the corresponding functional metrics interface.
Motivation
The current interface restricts the average argument to macro and micro, so the class metrics interface cannot be used to calculate precision/recall/fbeta for an individual class. For example, in binary classification one is typically interested in the metric for the positive class (class 1), and this cannot be done with the current class interface. One therefore has to go back to the functional metric, which defeats the purpose of having class metrics (they take care of DDP sync). By contrast, sklearn defaults to calculating precision/recall/fbeta for the individual class (class 1) while offering the option to calculate the micro/macro/weighted average of these scores:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html
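As a concrete illustration of what the missing average=None reduction means (a plain-Python sketch mirroring the semantics of sklearn's recall_score(..., average=None), not the Lightning implementation):

```python
from collections import Counter

def per_class_recall(y_true, y_pred, num_classes):
    """Recall for each class separately, i.e. what average=None would return."""
    tp = Counter()       # true positives per class
    support = Counter()  # number of true samples per class
    for t, p in zip(y_true, y_pred):
        support[t] += 1
        if t == p:
            tp[t] += 1
    return [tp[c] / support[c] if support[c] else 0.0 for c in range(num_classes)]

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
scores = per_class_recall(y_true, y_pred, 2)
print(scores)     # [1.0, 0.666...] -> recall for class 0 and class 1
print(scores[1])  # the positive-class number the current class interface cannot produce
```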
Pitch
Update the class metrics interface of Precision/Recall/Fbeta so that the average argument also accepts none and weighted, as in the corresponding functional metrics interface.
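The other requested reduction, average='weighted', averages the per-class scores weighted by class support. A plain-Python sketch of that behavior (assumed to match the functional metrics; not the Lightning code):

```python
def weighted_recall(y_true, y_pred, num_classes):
    """average='weighted': per-class recall averaged with class-support weights."""
    tp = [0] * num_classes
    support = [0] * num_classes
    for t, p in zip(y_true, y_pred):
        support[t] += 1
        if t == p:
            tp[t] += 1
    total = sum(support)
    # Weight each class's recall by its share of the true labels.
    return sum((support[c] / total) * (tp[c] / support[c])
               for c in range(num_classes) if support[c])

print(weighted_recall([0, 1, 1, 0, 1], [0, 1, 0, 0, 1], 2))  # 0.8
```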
Alternatives
One can always fall back to the functional metric but I assume this is not what we would like.
Additional context
Really like the new class interface to work with DDP and appreciate all your work!
The Fbeta metric will get that feature when this PR is merged: Lightning-AI/pytorch-lightning#4656
Once that is merged, I plan to deal with Precision and Recall.
It computes the one-hot vector, i.e. it converts [0, 1] into [[1, 0], [0, 1]]. In the binary case it then takes both the negative and the positive class into account (as true_positives, as written in the code). So I would say it produces wrong results for binary classification, right?
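The effect described above can be checked numerically: with one-hot encoding, micro-averaging pools true and false positives from both classes, so the result collapses to plain accuracy rather than the positive-class precision. A small self-contained check (a sketch of the reduction, not the Lightning code):

```python
def micro_precision_onehot(y_true, y_pred, num_classes):
    """Micro precision over one-hot vectors: pools TP/FP across ALL classes."""
    tp = fp = 0
    for t, p in zip(y_true, y_pred):
        for c in range(num_classes):
            if p == c:          # one-hot "positive" prediction for class c
                if t == c:
                    tp += 1
                else:
                    fp += 1
    return tp / (tp + fp)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(micro_precision_onehot(y_true, y_pred, 2))  # 0.8 -- just the accuracy

# Positive-class-only precision for comparison:
tp1 = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
pp1 = sum(1 for p in y_pred if p == 1)
print(tp1 / pp1)  # 1.0 -- what a user usually wants in binary classification
```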
This issue was also reported in Lightning-AI/pytorch-lightning#3035, but reduction='none' remains unimplemented for the Precision and Recall classes.
I think an easy way to implement it would be another parameter such as pos_labels=[0], which could also accept n labels. That would be quite helpful for both binary and multi-class classification.
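A hypothetical sketch of that idea (the name pos_labels and this helper are illustrative only, not an existing Lightning API): compute per-class scores first, then select or aggregate only the requested labels.

```python
def select_labels(per_class_scores, pos_labels):
    """Hypothetical pos_labels reduction: keep only the requested classes.

    per_class_scores: list of per-class metric values (as from average=None).
    pos_labels: list of class indices of interest, e.g. [1] for binary.
    """
    picked = [per_class_scores[c] for c in pos_labels]
    # With one label return the scalar; with several, their mean.
    return picked[0] if len(picked) == 1 else sum(picked) / len(picked)

per_class = [1.0, 2 / 3]                 # e.g. per-class recall from average=None
print(select_labels(per_class, [1]))     # positive class only
print(select_labels(per_class, [0, 1]))  # mean over the chosen labels
```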
Borda transferred this issue from Lightning-AI/pytorch-lightning on Mar 12, 2021