This issue will be fixed by the classification refactor: see issue #1001 and PR #1195 for all changes.
Small recap: this issue describes that jaccard_index is wrongly calculated in the multilabel setting, which was simply due to a wrong implementation. The issue has been fixed in the refactor, so everything should now be correct (our implementation is better tested against sklearn now). The only difference is that instead of jaccard_index, the specialized version multilabel_jaccard_index should be used:
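A minimal usage sketch, assuming the post-refactor functional API (torchmetrics >= 0.11); the data below is purely illustrative:

```python
import torch
from torchmetrics.functional.classification import multilabel_jaccard_index

# Illustrative multilabel data: 4 samples, 3 labels
preds = torch.tensor([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
target = torch.tensor([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 1]])

# Dedicated multilabel entry point introduced by the refactor
score = multilabel_jaccard_index(preds, target, num_labels=3, average="micro")
print(score)
```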
🐛 Bug
JaccardIndex is not computed correctly when multilabel=True and average is anything other than "none".
To Reproduce
Instantiate JaccardIndex(..., multilabel=True, average="micro") and call it as usual with multilabel classification data.
Code sample
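A minimal reproduction sketch, following the description above and assuming the pre-refactor torchmetrics 0.9.x signature (argument names mirror the issue text; the data is illustrative):

```python
import torch
from torchmetrics import JaccardIndex  # torchmetrics 0.9.x API

# Illustrative multilabel data: 4 samples, 3 labels
preds = torch.tensor([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
target = torch.tensor([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 1]])

jaccard = JaccardIndex(num_classes=3, multilabel=True, average="micro")
print(jaccard(preds, target))  # result does not match the expected multilabel micro Jaccard
```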
Environment
Additional context
When multilabel=True, _jaccard_from_confmat is called and index 1 of the result is accessed (this part is correct):
https://github.com/Lightning-AI/metrics/blob/v0.9.3/torchmetrics/classification/jaccard.py#L117
Possible implementation snippet
With this IoU (Jaccard) base implementation, you can easily compose the different combinations (multilabel=True + macro, multilabel=False + micro, etc.), as sketched after the list below.
For example:

- Micro + multilabel=True
- Macro + multilabel=True
- Macro + multilabel=False

And so on.
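A rough sketch of what such a base implementation could look like (the helper name jaccard_from_stats and the reduction behaviour below are assumptions, not the author's original snippet):

```python
import torch


def jaccard_from_stats(intersection: torch.Tensor, union: torch.Tensor, average: str) -> torch.Tensor:
    """Reduce per-label (or per-class) intersection/union counts into a Jaccard score.

    "micro" sums the counts before dividing, "macro" divides per label first and
    then averages the scores, "none" returns the per-label scores unchanged.
    """
    if average == "micro":
        return intersection.sum() / union.sum()
    if average == "macro":
        return (intersection / union).mean()
    return intersection / union  # average == "none"


# Micro/Macro + multilabel=True: every (sample, label) cell is a binary decision
preds = torch.tensor([[1, 0, 1], [0, 1, 1]])
target = torch.tensor([[1, 0, 0], [0, 1, 1]])
intersection = (preds & target).sum(dim=0).float()
union = (preds | target).sum(dim=0).float()
print(jaccard_from_stats(intersection, union, average="micro"))  # sum counts, then divide
print(jaccard_from_stats(intersection, union, average="macro"))  # per-label scores, then mean
```

The multilabel=False combinations would differ only in how the intersection/union statistics are collected (from a multiclass confusion matrix instead of per-label binary decisions), while the reduction step stays the same.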