The `MulticlassRecall` function with a `top_k` value greater than 1 and `average="macro"` is not behaving as expected. Ideally, recall should improve as `top_k` increases; however, in some cases it does not.

To Reproduce
it returns:

```
tensor(0.) tensor(0.0357) tensor(0.0213) tensor(0.0154)
```
Expected behavior

The recall values should increase (or at least not decrease) as k grows.
Environment

TorchMetrics installed via pip.

Additional context
I checked the function `_adjust_weights_safe_divide`, which `_precision_recall_reduce` uses to compute recall, and I am unsure about this snippet:
In a multiclass scenario, the number of false positives (fp) tends to increase with a higher `top_k`. This inflates `weights.sum(-1, keepdim=True)` and consequently reduces the final recall@k. Also, when we calculate recall, should the mask be `weights[tp + fn == 0] = 0.0` instead?
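To make the concern concrete, here is a self-contained paraphrase of the reduction logic in question, simplified to the 1-D multiclass case (not the actual torchmetrics source; the names mirror the functions cited above, but details may vary between versions), alongside the suggested alternative mask:

```python
# Paraphrase of torchmetrics' macro reduction for recall (simplified sketch,
# not the actual library source).
import torch

def safe_divide(num: torch.Tensor, denom: torch.Tensor) -> torch.Tensor:
    """0/0 -> 0, mirroring torchmetrics' _safe_divide."""
    denom = denom.clone().float()
    denom[denom == 0] = 1.0
    return num.float() / denom

def macro_recall_current(tp, fp, fn):
    # Per-class recall, averaged with uniform weights; classes with
    # tp + fp + fn == 0 are dropped (the behaviour questioned above).
    score = safe_divide(tp, tp + fn)
    weights = torch.ones_like(score)
    weights[tp + fp + fn == 0] = 0.0
    return safe_divide(weights * score, weights.sum(-1, keepdim=True)).sum(-1)

def macro_recall_suggested(tp, fp, fn):
    # Proposed mask: drop only classes with no positive samples (tp + fn == 0),
    # so extra false positives at higher top_k cannot shrink the average.
    score = safe_divide(tp, tp + fn)
    weights = torch.ones_like(score)
    weights[tp + fn == 0] = 0.0
    return safe_divide(weights * score, weights.sum(-1, keepdim=True)).sum(-1)

# Three classes; class 2 has no true samples. At k = 1 it is never predicted;
# at k = 2 the extra guesses add false positives for it.
tp = torch.tensor([3, 3, 0])
fn = torch.tensor([0, 0, 0])
fp_k1 = torch.tensor([0, 0, 0])
fp_k2 = torch.tensor([2, 1, 3])

print(macro_recall_current(tp, fp_k1, fn))    # tensor(1.)
print(macro_recall_current(tp, fp_k2, fn))    # tensor(0.6667) -- recall@k dropped
print(macro_recall_suggested(tp, fp_k2, fn))  # tensor(1.) under the proposed mask
```

The per-class recalls never worsen as k grows, yet the current mask lets a class that gains only false positives re-enter the denominator, pulling the macro average down.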