Feat: Warn when doing local torch elastic training with nnodes > 1 #1697
TL;DR
With `@task(task_config=Elastic(...))` one can perform training with torch elastic launch (`torchrun`). This works both locally and in a cluster with a kubeflow `PyTorchJob`.

When executing a workflow locally, i.e. `python workflow.py`, but setting e.g. `Elastic(nnodes=2)`, the rendezvous of the workers times out because the workers wait for workers from a non-existent second node to join. One would have to set the log level to debug to see that torch is waiting for the rendezvous to complete; by default, the workflow appears to do nothing.

In this PR I add a warning log message that informs the user about this.
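For context, a minimal reproduction sketch (illustrative, not part of the PR diff): a task configured with `Elastic(nnodes=2)` and executed locally with `python workflow.py`. The task and workflow names here are placeholders.

```python
# Reproduction sketch (not from the PR): running this file locally triggers
# the described hang, because the rendezvous waits for workers from a second
# node that does not exist.
from flytekit import task, workflow
from flytekitplugins.kfpytorch import Elastic


@task(task_config=Elastic(nnodes=2))
def train() -> str:
    # In a cluster, each node of the PyTorchJob contributes workers to the
    # rendezvous; locally there is no second node, so torch elastic waits
    # until the rendezvous times out.
    return "done"


@workflow
def wf() -> str:
    return train()


if __name__ == "__main__":
    # Appears to do nothing; only debug-level logs show that torch is
    # waiting for the rendezvous to complete.
    print(wf())
```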
Type
Are all requirements met?
Complete description
I check for an environment variable that is set by the kubeflow training operator. If it is not set but the user set `nnodes>1`, the warning is emitted. One could discuss whether we should just automatically switch to `nnodes=1` when the environment variables for distributed training have not been set by the training operator, but I found this too intrusive; warning the user, however, should be done.
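A minimal sketch of the kind of check described above, not the exact PR diff: the environment-variable name assumed to be set by the kubeflow training operator, as well as the function and constant names, are assumptions for illustration.

```python
# Sketch of the check (assumed names, not the actual PR code).
import logging
import os

logger = logging.getLogger(__name__)

# Assumed marker variable set by the kubeflow training operator for elastic
# PyTorchJobs; the real implementation may key off a different variable.
TRAINING_OPERATOR_ENV_VAR = "PET_NNODES"


def warn_if_local_multi_node(nnodes: int) -> None:
    """Warn when nnodes > 1 but the env vars for distributed training are absent.

    nnodes is simplified to an int here; Elastic also accepts a range.
    """
    running_under_operator = TRAINING_OPERATOR_ENV_VAR in os.environ
    if nnodes > 1 and not running_under_operator:
        logger.warning(
            "Execution appears to be local but nnodes is set to %d. "
            "The torch elastic rendezvous will wait for workers from other "
            "nodes that will never join and eventually time out. "
            "Consider setting nnodes=1 for local execution.",
            nnodes,
        )
```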
Tracking Issue

NA
Follow-up issue
NA