Update doc/train/parallel-training.md
Co-authored-by: Jinzhe Zeng <[email protected]>
Signed-off-by: Lysithea <[email protected]>
CaRoLZhangxy and njzjz authored Mar 2, 2024
1 parent 8107b02 commit ae68fa6
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion doc/train/parallel-training.md
@@ -4,7 +4,7 @@
 **Supported backends**: TensorFlow {{ tensorflow_icon }}, PyTorch {{ pytorch_icon }}
 :::
 
-## TensorFlow Implementation
+## TensorFlow Implementation {{ tensorflow_icon }}
 Currently, parallel training in tensorflow version is enabled in a synchronized way with help of [Horovod](https://github.com/horovod/horovod).
 Depending on the number of training processes (according to MPI context) and the number of GPU cards available, DeePMD-kit will decide whether to launch the training in parallel (distributed) mode or in serial mode. Therefore, no additional options are specified in your JSON/YAML input file.
 
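For context, the documentation section touched by this one-line diff describes Horovod-based parallel training with the TensorFlow backend: DeePMD-kit inspects the MPI context at launch and switches between serial and distributed mode on its own, so the JSON/YAML input needs no extra options. A minimal sketch of how such a run is typically started is shown below; the process count, GPU list, and file name `input.json` are illustrative assumptions, and the exact flags in the DeePMD-kit docs may differ.

```bash
# Sketch: launch DeePMD-kit (TensorFlow backend) training under Horovod.
# Assumes 4 MPI workers with one visible GPU each; DeePMD-kit detects the
# MPI context and runs in distributed mode automatically, so input.json
# contains no parallel-training-specific options.
CUDA_VISIBLE_DEVICES=0,1,2,3 horovodrun -np 4 dp train input.json
```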
