
Commit 43fa28e

Update doc/train/parallel-training.md
Co-authored-by: Jinzhe Zeng <[email protected]>
Signed-off-by: Lysithea <[email protected]>
CaRoLZhangxy and njzjz authored Mar 2, 2024
1 parent ae68fa6 commit 43fa28e
Showing 1 changed file with 1 addition and 1 deletion.
doc/train/parallel-training.md (2 changes: 1 addition & 1 deletion)

````diff
@@ -86,7 +86,7 @@ optional arguments:
     master)
 ```

-## PyTorch Implementation
+## PyTorch Implementation {{ pytorch_icon }}

 Currently, parallel training in pytorch version is implemented in the form of PyTorch Distributed Data Parallelism [DDP](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html).
 DeePMD-kit will decide whether to launch the training in parallel (distributed) mode or in serial mode depending on your execution command.
````
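The one-line change adds `{{ pytorch_icon }}`, a MyST substitution token that the Sphinx build can expand into a PyTorch badge next to the heading. As a hedged illustration of how such a token is typically wired up in a Sphinx project using myst-parser (the substitution value and icon path below are assumptions, not DeePMD-kit's actual configuration):

```python
# conf.py: illustrative sketch only. The key must match the token used in
# the Markdown source ({{ pytorch_icon }}), but the value shown here is an
# assumption, not taken from DeePMD-kit's real configuration.

# Enable MyST's substitution syntax ({{ ... }}) in Markdown sources.
myst_enable_extensions = ["substitution"]

# Map the token to whatever markup should be rendered in its place.
myst_substitutions = {
    "pytorch_icon": "![PyTorch](/_static/pytorch.svg)",
}
```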

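The context lines also state that DeePMD-kit decides between distributed and serial mode based on the execution command. A common way to implement that choice, shown here as a minimal Python sketch rather than DeePMD-kit's actual code, is to check the environment variables that a launcher such as `torchrun` exports:

```python
import os

import torch
import torch.distributed as dist


def maybe_init_distributed() -> bool:
    """Initialize DDP only if a distributed launcher started this process.

    Launchers such as torchrun export RANK and WORLD_SIZE into the
    environment; a plain ``python`` invocation leaves them unset, so the
    program falls back to serial training.
    """
    if int(os.environ.get("WORLD_SIZE", "1")) > 1:
        backend = "nccl" if torch.cuda.is_available() else "gloo"
        # Reads MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE from the environment.
        dist.init_process_group(backend=backend)
        return True
    return False
```

With this pattern, a command like `torchrun --nproc_per_node=4 train.py` takes the distributed branch, while `python train.py` stays serial.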