If there are 2 GPUs per node, how should the Worker spec be set in the PyTorchJob: 1 replica with 2 GPUs per pod, or 2 replicas with only 1 GPU per pod?

I've seen similar issues (#219), but there are no clear instructions on whether a multi-GPU-per-pod setup is supported in PyTorchJob.

Is pytorch-operator designed for a 1-GPU-per-pod setup even though there are multiple GPUs on the same node? Will a multi-GPU-per-pod setup be supported? Both options are sketched below.
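For reference, here is a minimal sketch of a PyTorchJob manifest contrasting the two options from the question. The job name and image are placeholders, not from the original issue; the `pytorchReplicaSpecs` / `nvidia.com/gpu` fields are the standard `kubeflow.org/v1` API. Nothing in the spec itself forbids requesting more than one GPU per pod:

```yaml
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: pytorch-multi-gpu        # placeholder name
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch      # the operator looks for this container name
              image: your-registry/pytorch-dist:latest   # placeholder image
              resources:
                limits:
                  nvidia.com/gpu: 2
    Worker:
      replicas: 1                # option A: one worker pod per node ...
      # replicas: 2              # option B: two worker pods per node ...
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: your-registry/pytorch-dist:latest   # placeholder image
              resources:
                limits:
                  nvidia.com/gpu: 2   # ... option A: both GPUs in one pod
                  # nvidia.com/gpu: 1 # ... option B: one GPU per pod
```

One caveat, as far as I understand the v1 operator: it sets `WORLD_SIZE` and `RANK` per pod, not per GPU. So with option B each pod is one rank and the script works unchanged, while option A requires the training script to spawn one process per local GPU itself (e.g. via `torch.multiprocessing` or `torch.distributed.launch` with `--nproc_per_node=2`).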