From ae68fa632040a873b0555f9d29328215cb166133 Mon Sep 17 00:00:00 2001
From: Lysithea <52808607+CaRoLZhangxy@users.noreply.github.com>
Date: Sat, 2 Mar 2024 16:36:34 +0800
Subject: [PATCH] Update doc/train/parallel-training.md

Co-authored-by: Jinzhe Zeng
Signed-off-by: Lysithea <52808607+CaRoLZhangxy@users.noreply.github.com>
---
 doc/train/parallel-training.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/train/parallel-training.md b/doc/train/parallel-training.md
index 82a1f932ea..c492be5490 100644
--- a/doc/train/parallel-training.md
+++ b/doc/train/parallel-training.md
@@ -4,7 +4,7 @@
 **Supported backends**: TensorFlow {{ tensorflow_icon }}, PyTorch {{ pytorch_icon }}
 :::

-## TensorFlow Implementation
+## TensorFlow Implementation {{ tensorflow_icon }}

 Currently, parallel training in tensorflow version is enabled in a synchronized way with help of [Horovod](https://github.com/horovod/horovod). Depending on the number of training processes (according to MPI context) and the number of GPU cards available, DeePMD-kit will decide whether to launch the training in parallel (distributed) mode or in serial mode. Therefore, no additional options are specified in your JSON/YAML input file.
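
For context, a minimal usage sketch of the behaviour described in the hunk above (not part of this patch): assuming Horovod, an MPI runtime, and the `dp` command are installed, the distributed mode is selected simply by launching training under multiple MPI processes, while running the same command directly keeps the serial mode. The process count and GPU list below are illustrative.

```bash
# Serial mode: a single training process, no MPI context.
dp train input.json

# Parallel (distributed) mode: four processes launched under Horovod/MPI.
# DeePMD-kit detects the MPI context and the available GPU cards and
# switches to synchronized parallel training automatically.
CUDA_VISIBLE_DEVICES=0,1,2,3 horovodrun -np 4 dp train input.json
```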