From 73ded17acca299356c8bfce19f9c8938ad6a8e41 Mon Sep 17 00:00:00 2001
From: Xinyu Ye
Date: Wed, 30 Nov 2022 13:32:36 +0800
Subject: [PATCH] fixed typo

Signed-off-by: Xinyu Ye
---
 .../MobileNetV2-0.35/distillation/eager/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/pytorch/image_recognition/MobileNetV2-0.35/distillation/eager/README.md b/examples/pytorch/image_recognition/MobileNetV2-0.35/distillation/eager/README.md
index b9941840ef3..d449d5f797b 100644
--- a/examples/pytorch/image_recognition/MobileNetV2-0.35/distillation/eager/README.md
+++ b/examples/pytorch/image_recognition/MobileNetV2-0.35/distillation/eager/README.md
@@ -14,7 +14,7 @@ We also supported Distributed Data Parallel training on single node and multi no
 For example, bash command will look like the following, where *`<MASTER_ADDRESS>`* is the address of the master node, it won't be necessary for single node case, *`<NUM_PROCESSES_PER_NODE>`* is the desired processes to use in current node, for node with GPU, usually set to number of GPUs in this node, for node without GPU and use CPU for training, it's recommended set to 1, *`<NUM_NODES>`* is the number of nodes to use, *`<NODE_RANK>`* is the rank of the current node, rank starts from 0 to *`<NUM_NODES>`*`-1`.
-Also please note that to use CPU for training in each node with multi nodes settings, argument `--no_cuda` is mandatory. In multi nodes setting, following command needs to be lanuched in each node, and all the commands should be the same except for *`<NODE_RANK>`*, which should be integer from 0 to *`<NUM_NODES>`*`-1` assigned to each node.
+Also please note that to use CPU for training in each node with multi nodes settings, argument `--no_cuda` is mandatory. In multi nodes setting, following command needs to be launched in each node, and all the commands should be the same except for *`<NODE_RANK>`*, which should be integer from 0 to *`<NUM_NODES>`*`-1` assigned to each node.
 ```bash
 python -m torch.distributed.launch --master_addr=<MASTER_ADDRESS> --nproc_per_node=<NUM_PROCESSES_PER_NODE> --nnodes=<NUM_NODES> --node_rank=<NODE_RANK> \
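
For reference only, outside the patch itself: a minimal sketch of what the expanded command might look like on each of two CPU-only nodes. The master address `192.168.0.1`, the per-node process count of 1, and the training script name `main.py` are illustrative assumptions rather than values taken from the README, and the script's remaining arguments are omitted.

```bash
# Hypothetical node 0 (master): CPU-only training, so one process per node
# and --no_cuda passed to the (assumed) training script main.py.
python -m torch.distributed.launch --master_addr=192.168.0.1 --nproc_per_node=1 --nnodes=2 --node_rank=0 \
    main.py --no_cuda

# Hypothetical node 1: identical command except for --node_rank.
python -m torch.distributed.launch --master_addr=192.168.0.1 --nproc_per_node=1 --nnodes=2 --node_rank=1 \
    main.py --no_cuda
```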