[Metrics] Class reduction similar to sklearn #3322

Merged (30 commits, Sep 15, 2020)
Changes from 13 commits

Commits (30)
8fdce22
new class reduce interface
SkafteNicki Sep 1, 2020
ceb46b5
update docs
Sep 2, 2020
590e9bb
pep8
Sep 2, 2020
a074787
update_class_metrics
Sep 2, 2020
7a3b1c9
fix doctest
SkafteNicki Sep 3, 2020
3fa1a01
merge
SkafteNicki Sep 3, 2020
cc1ca72
changelog
SkafteNicki Sep 3, 2020
08eff55
fix docs
SkafteNicki Sep 4, 2020
6658e44
fix codefactor
SkafteNicki Sep 4, 2020
5846181
fix codefactor
SkafteNicki Sep 4, 2020
16a0086
formatting
Borda Sep 4, 2020
8052c1f
fix typo
SkafteNicki Sep 5, 2020
514a1d0
Merge branch 'metrics/new_class_reduce' of https://github.com/SkafteN…
SkafteNicki Sep 5, 2020
62f6fa8
fix typo
SkafteNicki Sep 7, 2020
a74abc0
typo pr -> per
awaelchli Sep 7, 2020
0c8fab3
update from suggestion
SkafteNicki Sep 8, 2020
8785e06
Merge branch 'metrics/new_class_reduce' of https://github.com/SkafteN…
SkafteNicki Sep 8, 2020
cad5a34
fix error
SkafteNicki Sep 8, 2020
dcd6911
Apply suggestions from code review
Borda Sep 8, 2020
1d3a9b3
Update CHANGELOG.md
Borda Sep 11, 2020
0070246
formatting
Borda Sep 11, 2020
3055da8
Merge branch 'master' into metrics/new_class_reduce
SkafteNicki Sep 11, 2020
08cdb98
Merge remote-tracking branch 'upstream/master' into metrics/new_class…
SkafteNicki Sep 13, 2020
a9ee01c
timeouts
Borda Sep 14, 2020
a128f1f
Merge branch 'master' into metrics/new_class_reduce
SkafteNicki Sep 14, 2020
a3358a7
docstring formatting for reg metrics
rohitgr7 Sep 14, 2020
1c1cacc
pep
rohitgr7 Sep 14, 2020
5b5dfd1
flake8
rohitgr7 Sep 14, 2020
a758527
revert workflow changes
SkafteNicki Sep 15, 2020
da81363
suggestions
SkafteNicki Sep 15, 2020
CHANGELOG.md: 4 changes (3 additions, 1 deletion)
@@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Added hooks to metric module interface ([#2528](https://github.com/PyTorchLightning/pytorch-lightning/pull/2528/))

- Added `class_reduction` similar to sklearn for classification metrics ([#3322](https://github.com/PyTorchLightning/pytorch-lightning/pull/3322))

### Changed


@@ -125,7 +127,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed adding val step argument to metrics ([#2986](https://github.com/PyTorchLightning/pytorch-lightning/pull/2986))
- Fixed an issue that caused `Trainer.test()` to stall in ddp mode ([#2997](https://github.com/PyTorchLightning/pytorch-lightning/pull/2997))
- Fixed gathering of results with tensors of varying shape ([#3020](https://github.com/PyTorchLightning/pytorch-lightning/pull/3020))
- Fixed batch size auto-scaling feature to set the new value on the correct model attribute ([#3043](https://github.com/PyTorchLightning/pytorch-lightning/pull/3043))
- Fixed batch size auto-scaling feature to set the new value on the correct model attribute ([#3043](https://github.com/PyTorchLightning/pytorch-lightning/pull/3043))
- Fixed automatic batch scaling not working with half precision ([#3045](https://github.com/PyTorchLightning/pytorch-lightning/pull/3045))
- Fixed setting device to root gpu ([#3042](https://github.com/PyTorchLightning/pytorch-lightning/pull/3042))

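Before the classification.py diff below, a rough orientation on what the new `class_reduction` modes compute. This is an illustrative sketch only, not the helper this PR actually adds; the handling of classes with zero support (score forced to 0) is an assumption on my part.

```python
# Illustrative only: approximate semantics of the four class_reduction modes,
# given per-class true-positive counts (tps) and supports (sups).
import torch

def sketch_class_reduce(tps: torch.Tensor, sups: torch.Tensor, class_reduction: str) -> torch.Tensor:
    tps, sups = tps.float(), sups.float()
    per_class = tps / sups.clamp(min=1)      # zero-support classes score 0 here (assumption)
    if class_reduction == 'micro':           # pool counts over all classes, divide once
        return tps.sum() / sups.sum()
    if class_reduction == 'macro':           # unweighted mean of per-class scores
        return per_class.mean()
    if class_reduction == 'weighted':        # mean weighted by each class' support
        return (per_class * sups / sups.sum()).sum()
    if class_reduction == 'none':            # no reduction: one score per class
        return per_class
    raise ValueError(f'unknown class_reduction: {class_reduction}')

# pred=[0, 1, 2, 3] vs target=[0, 1, 2, 2] gives tps=[1, 1, 1, 0], sups=[1, 1, 2, 0]
tps, sups = torch.tensor([1, 1, 1, 0]), torch.tensor([1, 1, 2, 0])
print(sketch_class_reduce(tps, sups, 'micro'))  # tensor(0.7500)
print(sketch_class_reduce(tps, sups, 'macro'))  # tensor(0.6250)
```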
pytorch_lightning/metrics/classification.py: 120 changes (62 additions, 58 deletions)
@@ -52,26 +52,28 @@ class Accuracy(TensorMetric):
def __init__(
self,
num_classes: Optional[int] = None,
reduction: str = 'elementwise_mean',
class_reduction: str = 'micro',
reduce_group: Any = None,
reduce_op: Any = None,
):
"""
Args:
num_classes: number of classes
reduction: a method to reduce metric score over labels (default: takes the mean)
Available reduction methods:
- elementwise_mean: takes the mean
- none: pass array
- sum: add elements
class_reduction: reduction method for multiclass problems

- ``'micro'``: calculate metrics globally (default)
- ``'macro'``: calculate metrics for each label, and find their unweighted mean.
- ``'weighted'``: calculate metrics for each label, and find their weighted mean (weighted by class support).
- ``'none'``: returns calculated metric per class

reduce_group: the process group to reduce metric results from DDP
reduce_op: the operation to perform for ddp reduction
"""
super().__init__(name='accuracy',
reduce_group=reduce_group,
reduce_op=reduce_op)
self.num_classes = num_classes
self.reduction = reduction
self.class_reduction = class_reduction

def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
@@ -85,7 +87,7 @@ def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
A Tensor with the classification score.
"""
return accuracy(pred=pred, target=target,
num_classes=self.num_classes, reduction=self.reduction)
num_classes=self.num_classes, class_reduction=self.class_reduction)
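A minimal usage sketch for the updated Accuracy interface shown above. The import path is assumed to match `pytorch_lightning.metrics` at the time of this PR, and only the micro value (3 of 4 predictions correct) is asserted in the comments.

```python
import torch
from pytorch_lightning.metrics import Accuracy  # assumed import path for this version

pred = torch.tensor([0, 1, 2, 3])
target = torch.tensor([0, 1, 2, 2])

# 'micro' (the new default) pools all samples: 3 of 4 correct -> tensor(0.7500)
print(Accuracy(class_reduction='micro')(pred, target))

# 'none' skips the reduction and returns one score per class
print(Accuracy(num_classes=4, class_reduction='none')(pred, target))
```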


class ConfusionMatrix(TensorMetric):
@@ -146,13 +148,10 @@ class PrecisionRecallCurve(TensorCollectionMetric):
>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = PrecisionRecallCurve()
>>> prec, recall, thr = metric(pred, target)
>>> prec
tensor([0.3333, 0.0000, 0.0000, 1.0000])
>>> recall
tensor([1., 0., 0., 0.])
>>> thr
tensor([1., 2., 3.])
>>> metric(pred, target) # doctest: +NORMALIZE_WHITESPACE
(tensor([0.3333, 0.0000, 0.0000, 1.0000]),
tensor([1., 0., 0., 0.]),
tensor([1., 2., 3.]))

"""

@@ -206,7 +205,7 @@ class Precision(TensorMetric):

>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = Precision(num_classes=4)
>>> metric = Precision(num_classes=4, class_reduction='macro')
>>> metric(pred, target)
tensor(0.7500)
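As a sanity check on the macro value above: the per-class precisions are 1.0, 1.0, 1.0 and 0.0 (class 3 is predicted once but never appears in the target), and sklearn, whose `average` argument this PR mirrors, reports the same mean:

```python
from sklearn.metrics import precision_score

# per-class precision [1.0, 1.0, 1.0, 0.0] -> macro mean 0.75
print(precision_score(y_true=[0, 1, 2, 2], y_pred=[0, 1, 2, 3], average='macro'))  # 0.75
```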

@@ -215,26 +214,28 @@ def __init__(
def __init__(
self,
num_classes: Optional[int] = None,
reduction: str = 'elementwise_mean',
class_reduction: str = 'micro',
reduce_group: Any = None,
reduce_op: Any = None,
):
"""
Args:
num_classes: number of classes
reduction: a method to reduce metric score over labels (default: takes the mean)
Available reduction methods:
- elementwise_mean: takes the mean
- none: pass array
- sum: add elements
class_reduction: reduction method for multiclass problems

- ``'micro'``: calculate metrics globally (default)
- ``'macro'``: calculate metrics for each label, and find their unweighted mean.
- ``'weighted'``: calculate metrics for each label, and find their weighted mean (weighted by class support).
- ``'none'``: returns calculated metric per class

reduce_group: the process group to reduce metric results from DDP
reduce_op: the operation to perform for ddp reduction
"""
super().__init__(name='precision',
reduce_group=reduce_group,
reduce_op=reduce_op)
self.num_classes = num_classes
self.reduction = reduction
self.class_reduction = class_reduction

def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
@@ -249,7 +250,7 @@ def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
return precision(pred=pred, target=target,
num_classes=self.num_classes,
reduction=self.reduction)
class_reduction=self.class_reduction)


class Recall(TensorMetric):
@@ -262,25 +263,27 @@ class Recall(TensorMetric):
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = Recall()
>>> metric(pred, target)
tensor(0.6250)
tensor(0.7500)

"""

def __init__(
self,
num_classes: Optional[int] = None,
reduction: str = 'elementwise_mean',
class_reduction: str = 'micro',
reduce_group: Any = None,
reduce_op: Any = None,
):
"""
Args:
num_classes: number of classes
reduction: a method to reduce metric score over labels (default: takes the mean)
Available reduction methods:
- elementwise_mean: takes the mean
- none: pass array
- sum: add elements
class_reduction: reduction method for multiclass problems

- ``'micro'``: calculate metrics globally (default)
- ``'macro'``: calculate metrics for each label, and find their unweighted mean.
- ``'weighted'``: calculate metrics for each label, and find their weighted mean (weighted by class support).
- ``'none'``: returns calculated metric per class

reduce_group: the process group to reduce metric results from DDP
reduce_op: the operation to perform for ddp reduction
"""
@@ -289,7 +292,7 @@ def __init__(
reduce_op=reduce_op)

self.num_classes = num_classes
self.reduction = reduction
self.class_reduction = class_reduction

def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
@@ -305,7 +308,7 @@ def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
return recall(pred=pred,
target=target,
num_classes=self.num_classes,
reduction=self.reduction)
class_reduction=self.class_reduction)


class AveragePrecision(TensorMetric):
@@ -425,7 +428,7 @@ class FBeta(TensorMetric):

>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = FBeta(0.25)
>>> metric = FBeta(0.25, class_reduction='macro')
>>> metric(pred, target)
tensor(0.7361)
"""
@@ -434,19 +437,21 @@ def __init__(
self,
beta: float,
num_classes: Optional[int] = None,
reduction: str = 'elementwise_mean',
class_reduction: str = 'micro',
reduce_group: Any = None,
reduce_op: Any = None,
):
"""
Args:
beta: determines the weight of recall in the combined score.
num_classes: number of classes
reduction: a method to reduce metric score over labels (default: takes the mean)
Available reduction methods:
- elementwise_mean: takes the mean
- none: pass array
- sum: add elements
class_reduction: reduction method for multiclass problems

- ``'micro'``: calculate metrics globally (default)
- ``'macro'``: calculate metrics for each label, and find their unweighted mean.
- ``'weighted'``: calculate metrics for each label, and find their weighted mean (weighted by class support).
- ``'none'``: returns calculated metric per class

reduce_group: the process group to reduce metric results from DDP
reduce_op: the operation to perform for DDP reduction
"""
@@ -456,7 +461,7 @@ def __init__(

self.beta = beta
self.num_classes = num_classes
self.reduction = reduction
self.class_reduction = class_reduction

def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
Expand All @@ -471,7 +476,7 @@ def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
return fbeta_score(pred=pred, target=target,
beta=self.beta, num_classes=self.num_classes,
reduction=self.reduction)
class_reduction=self.class_reduction)


class F1(TensorMetric):
@@ -483,26 +488,28 @@ class F1(TensorMetric):

>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = F1()
>>> metric = F1(class_reduction='macro')
>>> metric(pred, target)
tensor(0.6667)
"""

def __init__(
self,
num_classes: Optional[int] = None,
reduction: str = 'elementwise_mean',
class_reduction: str = 'micro',
reduce_group: Any = None,
reduce_op: Any = None,
):
"""
Args:
num_classes: number of classes
reduction: a method to reduce metric score over labels (default: takes the mean)
Available reduction methods:
- elementwise_mean: takes the mean
- none: pass array
- sum: add elements
class_reduction: reduction method for multiclass problems

- ``'micro'``: calculate metrics globally (default)
- ``'macro'``: calculate metrics for each label, and find their unweighted mean.
- ``'weighted'``: calculate metrics for each label, and find their weighted mean (weighted by class support).
- ``'none'``: returns calculated metric per class

reduce_group: the process group to reduce metric results from DDP
reduce_op: the operation to perform for ddp reduction
"""
@@ -511,7 +518,7 @@ def __init__(
reduce_op=reduce_op)

self.num_classes = num_classes
self.reduction = reduction
self.class_reduction = class_reduction

def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
Expand All @@ -526,7 +533,7 @@ def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
"""
return f1_score(pred=pred, target=target,
num_classes=self.num_classes,
reduction=self.reduction)
class_reduction=self.class_reduction)


class ROC(TensorCollectionMetric):
Expand All @@ -538,13 +545,10 @@ class ROC(TensorCollectionMetric):
>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = ROC()
>>> fps, tps, thresholds = metric(pred, target)
>>> fps
tensor([0.0000, 0.3333, 0.6667, 0.6667, 1.0000])
>>> tps
tensor([0., 0., 0., 1., 1.])
>>> thresholds
tensor([4., 3., 2., 1., 0.])
>>> metric(pred, target) # doctest: +NORMALIZE_WHITESPACE
(tensor([0.0000, 0.3333, 0.6667, 0.6667, 1.0000]),
tensor([0., 0., 0., 1., 1.]),
tensor([4., 3., 2., 1., 0.]))

"""
