Skip metric computation if output is None #1065
Comments
Why not return the tensor?
@sdesrozis for XLA (see the link above):
IMO, such internal behaviour will be opaque to a user who does not intend to skip the loss value by returning a tensor. Another point: which rate should we use internally?
Ok, I agree with what you said. But I'm really not a fan; I hope it will be fixed by XLA (if possible...)
An update on that. I tried to implement skipping, and there can be a potential roadblock with
🚀 Feature
Considering the context of #1024, and another point about calling `tensor.item()` on each iteration with TPU, which slows down the computation, we need a way to skip using `state.output` if it is set to `None`.

In training on TPU, however, if we have a running metric attached to the output, it will fail if it encounters `None`.

So, the idea is to check `engine.state.output` here and return before doing any metric's update:

ignite/ignite/metrics/metric.py
Lines 123 to 124 in e3fc04e
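The proposed check can be sketched as follows. This is a minimal standalone simulation of the behaviour, not ignite's actual implementation: the `State` and `RunningAverage` classes below are simplified stand-ins, and the early return on `None` is the assumption being proposed.

```python
# Sketch of the proposed skip: a metric's iteration_completed returns early
# when engine.state.output is None, so the metric is unaffected, while other
# handlers attached to the same event still execute.

class State:
    """Simplified stand-in for engine.state."""
    def __init__(self):
        self.output = None


class RunningAverage:
    """Simplified stand-in for a running metric."""
    def __init__(self):
        self._sum = 0.0
        self._count = 0

    def update(self, value):
        self._sum += value
        self._count += 1

    def compute(self):
        return self._sum / self._count if self._count else None

    def iteration_completed(self, state):
        # Proposed check: skip the metric update if output is None.
        if state.output is None:
            return
        self.update(state.output)


state = State()
metric = RunningAverage()
other_handler_calls = 0

for output in [1.0, None, 3.0]:
    state.output = output
    metric.iteration_completed(state)  # metric silently skips the None step
    other_handler_calls += 1           # other handlers always run

print(metric.compute())     # 2.0 (average of 1.0 and 3.0 only)
print(other_handler_calls)  # 3
```

Note that, unlike a solution that terminates the iteration, the skip lives inside the metric's own event handler, so everything else attached to the iteration still fires.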
@sdesrozis @erip any thoughts?
This issue is related to #996, but the difference is that we still want to execute all other handlers on the iteration even if the output is `None`.