## 🚀 Feature

The trainer loops:

```
trainer:
    fit_loop:
        epoch_loop:
            batch_loop:
            validation_loop:
                ...
    validate_loop:
        ...
    test_loop:
        ...
```

and each loop has:

```
a_loop:
    progress
    results
```

Follow-up tasks:
- Run `fx_validator` for all functions: https://github.com/PyTorchLightning/pytorch-lightning/blob/764d2c775e2f3568975a60e94520bca8a42b2490/pytorch_lightning/trainer/connectors/logger_connector/fx_validator.py#L83. Always use `trainer.call_hook` (see the `call_hook` sketch below).
- #8498
- Use `Metric` to sync. We gather+reduce together (per training type), whereas `torchmetrics` does it separately. Would also need to inject the `is_tpu_distributed()` logic (see the sync sketch below): https://github.com/PyTorchLightning/pytorch-lightning/blob/764d2c775e2f3568975a60e94520bca8a42b2490/pytorch_lightning/trainer/connectors/logger_connector/result_new.py#L176-L193
- Fault Tolerant Logging: add `load_from_state_dict` and `{,load_}state_dict` to `ResultCollection` (see the `state_dict` sketch below).
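To make the first item concrete, here is a minimal sketch of why routing every hook through a single `trainer.call_hook` helps `fx_validator`: the trainer always knows which hook is currently running, so every `self.log()` call can be checked against one table. All names below (`FxValidator`, `_allowed`, the toy `Trainer`) are illustrative stand-ins, not Lightning's actual implementation.

```python
from typing import Optional


class FxValidator:
    # Hook name -> is `self.log(...)` allowed inside it? Entries are
    # illustrative; a real table would cover every LightningModule hook.
    _allowed = {
        "training_step": True,
        "validation_step": True,
        "on_train_epoch_end": True,
        "setup": False,  # too early: loggers are not wired up yet
    }

    def check(self, fx_name: Optional[str]) -> None:
        if not fx_name or not self._allowed.get(fx_name, False):
            raise RuntimeError(f"`self.log()` is not allowed inside `{fx_name}`")


class Trainer:
    """Toy trainer: only shows the `call_hook` bookkeeping."""

    def __init__(self) -> None:
        self.fx_validator = FxValidator()
        self._current_fx: Optional[str] = None

    def call_hook(self, module: object, fx_name: str, *args, **kwargs):
        # Single entry point for all hooks: record the name, run, reset.
        self._current_fx = fx_name
        try:
            hook = getattr(module, fx_name, None)
            if callable(hook):
                return hook(*args, **kwargs)
        finally:
            self._current_fx = None

    def log(self, name: str, value: float) -> None:
        # Called from inside a running hook; validated against its name.
        self.fx_validator.check(self._current_fx)
```

Because nothing runs a hook except `call_hook`, `_current_fx` can never go stale, which is the point of the "always use `trainer.call_hook`" item.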
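For the `Metric` sync item, a sketch of the "gather+reduce together, per training type" idea: pick one sync function from the active backend and apply it to every stored state in a single pass, instead of letting each `torchmetrics.Metric` run its own collective. `is_tpu_distributed()` is a hypothetical placeholder for the TPU check, and the function names are mine, not the library's.

```python
from typing import Callable, Dict

import torch
import torch.distributed as dist


def is_tpu_distributed() -> bool:
    # Hypothetical placeholder: a real check would query torch_xla.
    return False


def pick_sync_fn() -> Callable[[torch.Tensor], torch.Tensor]:
    # Choose one sync strategy per training type, once.
    if is_tpu_distributed():
        raise NotImplementedError("would use torch_xla collectives here")
    if dist.is_available() and dist.is_initialized():
        def all_reduce_sum(t: torch.Tensor) -> torch.Tensor:
            dist.all_reduce(t, op=dist.ReduceOp.SUM)
            return t
        return all_reduce_sum
    return lambda t: t  # single process: nothing to sync


def sync_states(states: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
    # One pass over all stored states with the same function, rather
    # than every metric syncing itself separately.
    sync = pick_sync_fn()
    return {name: sync(t.clone()) for name, t in states.items()}


if __name__ == "__main__":
    print(sync_states({"loss_sum": torch.tensor(3.0), "batches": torch.tensor(8.0)}))
```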
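For the Fault Tolerant Logging item, a sketch of what `{,load_}state_dict` on `ResultCollection` could look like, under the simplifying assumption that each logged result is a plain tensor; the real `ResultCollection` tracks far more metadata per entry.

```python
from typing import Dict

import torch


class ResultCollectionSketch:
    """Toy stand-in for `ResultCollection`, only to show the round trip."""

    def __init__(self) -> None:
        self._results: Dict[str, torch.Tensor] = {}

    def log(self, name: str, value: torch.Tensor) -> None:
        self._results[name] = value

    def state_dict(self) -> Dict[str, torch.Tensor]:
        # Detach so the checkpoint holds no autograd history.
        return {k: v.detach().clone() for k, v in self._results.items()}

    def load_state_dict(self, state: Dict[str, torch.Tensor]) -> None:
        # Restore logged values when resuming from a mid-epoch failure.
        self._results = {k: v.clone() for k, v in state.items()}


results = ResultCollectionSketch()
results.log("train_loss", torch.tensor(0.25))
ckpt = results.state_dict()      # saved alongside the trainer checkpoint

restored = ResultCollectionSketch()
restored.load_state_dict(ckpt)   # picked up again on resume
```

On resume, the restored collection would be handed back to the loop that owns it, matching the `a_loop: progress, results` structure above.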