🚀 Feature
A way to wrap multiple retrieval metrics together in order to speed up metric computation.
Motivation
When I want to compute k metrics on the same (indexes, preds, target) data, TorchMetrics groups the indexes k times, once per metric. When the number of groups is very large, this takes a very long time.
Pitch
A function or class that takes a list of metric names could achieve this, for example along the lines of the sketch below.
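A hypothetical sketch of such a wrapper (the function name and signature are made up for illustration; this is not an existing TorchMetrics API): the indexes are grouped once and the grouping is reused for every functional retrieval metric.

```python
from typing import Callable, Dict, List

import torch
from torchmetrics.functional import retrieval_average_precision, retrieval_reciprocal_rank


def multi_retrieval_metrics(
    metric_fns: Dict[str, Callable[[torch.Tensor, torch.Tensor], torch.Tensor]],
    indexes: torch.Tensor,
    preds: torch.Tensor,
    target: torch.Tensor,
) -> Dict[str, torch.Tensor]:
    """Group the indexes a single time and reuse the grouping for every metric."""
    # Hypothetical helper, not an existing TorchMetrics API.
    groups: Dict[int, List[int]] = {}
    for position, idx in enumerate(indexes.tolist()):
        groups.setdefault(idx, []).append(position)

    per_group = {name: [] for name in metric_fns}
    for positions in groups.values():
        pos = torch.tensor(positions)
        for name, fn in metric_fns.items():
            per_group[name].append(fn(preds[pos], target[pos]))
    # Average each metric over all query groups.
    return {name: torch.stack(vals).mean() for name, vals in per_group.items()}


# Example usage with two functional retrieval metrics:
# multi_retrieval_metrics(
#     {"map": retrieval_average_precision, "mrr": retrieval_reciprocal_rank},
#     indexes, preds, target,
# )
```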
Alternatives
Additional context
Hi @ZeguanXiao,
I am happy to report that this issue should already be solved on master. In PR #709 we introduced the concept of compute groups, which automatically group together computations from metrics that share the same underlying metric state (as all retrieval metrics do). If I run the following example, which uses 3 different retrieval metrics:
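A minimal sketch of such a benchmark (the original snippet is not preserved in this thread; the batch sizes, group count, and import paths are illustrative assumptions, and absolute timings will vary):

```python
import time

import torch
from torchmetrics import MetricCollection
from torchmetrics.retrieval import RetrievalMAP, RetrievalMRR, RetrievalNormalizedDCG

# Pre-generate random batches so that data creation is not part of the timing.
num_batches, batch_size, num_groups = 500, 256, 128
batches = [
    (
        torch.rand(batch_size),                       # preds: relevance scores
        torch.randint(0, 2, (batch_size,)),           # target: binary relevance labels
        torch.randint(0, num_groups, (batch_size,)),  # indexes: query ids
    )
    for _ in range(num_batches)
]


def benchmark(collection: MetricCollection) -> float:
    """Time a full update loop followed by a single compute call."""
    start = time.time()
    for preds, target, indexes in batches:
        collection.update(preds, target, indexes=indexes)
    collection.compute()
    return time.time() - start


metrics = [RetrievalMAP(), RetrievalMRR(), RetrievalNormalizedDCG()]

# Old behaviour: every metric keeps and updates its own copy of the inputs.
old_collection = MetricCollection([m.clone() for m in metrics], compute_groups=False)
print("Old metric collection:", benchmark(old_collection))

# New behaviour: metrics with identical states are merged into one compute group,
# so the shared (indexes, preds, target) state is only updated once per batch.
new_collection = MetricCollection([m.clone() for m in metrics], compute_groups=True)
print("New metric collection:", benchmark(new_collection))
```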
I get
Old metric collection: 1.651695966720581
New metric collection: 0.5793600082397461
meaning that with compute groups enabled (which will be on by default) the collection is roughly 3 times faster, corresponding to the shared state being updated only once instead of once per metric.
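To verify the grouping, the collection can be inspected after the first update (a small sketch, assuming the `new_collection` from the example above and the `compute_groups` property available in recent torchmetrics releases):

```python
# Metrics that share state are merged into a single compute group during update.
print(new_collection.compute_groups)
# e.g. {0: ['RetrievalMAP', 'RetrievalMRR', 'RetrievalNormalizedDCG']}
```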
Closing this issue.