Commit b704aaa

fix typo (#3343)
mohamed-ali authored May 26, 2022
1 parent 0fedea3 commit b704aaa
Showing 1 changed file with 1 addition and 1 deletion.
training/distributed_training/index.rst
@@ -47,7 +47,7 @@ training as quickly as possible or at least within a constrained time period.
 Then, distributed training is scaled to use a cluster of multiple nodes, meaning
 not just multiple GPUs in a computing instance, but multiple instances with
 multiple GPUs. As the cluster size increases, so does the significant drop in
-performance. This drop in performance is primarily caused the communications
+performance. This drop in performance is primarily caused by the communications
 overhead between nodes in a cluster.
 
 SageMaker distributed (SMD) offers two options for distributed training:
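For context on the passage being corrected: the two SageMaker distributed (SMD) options, data parallelism and model parallelism, are typically switched on through the distribution argument of a SageMaker Python SDK estimator. The sketch below is illustrative only and is not part of this commit; the entry point, IAM role, framework version, and instance settings are assumed placeholders.

# Illustrative sketch, not part of this commit: launching a multi-node
# training job with the SageMaker data parallel library enabled.
# entry_point, role, versions, and instance settings are assumptions.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",          # hypothetical training script
    role="<your-sagemaker-execution-role>",
    instance_count=2,                # multiple nodes -> inter-node communication overhead
    instance_type="ml.p3.16xlarge",  # multi-GPU instances
    framework_version="1.11.0",      # assumed framework/Python versions
    py_version="py38",
    # Data parallel option; the model parallel option uses the
    # "modelparallel" key under "smdistributed" instead.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit("s3://<your-bucket>/training-data")  # hypothetical S3 input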
