Adding initialization for num_pipelines_per_node (facebookresearch#1599)

Summary:
Adding initialization for `num_pipelines_per_node` in `infer_init_method` to avoid an unbound local error.

# Before submitting

- [ ] Was this discussed/approved via a GitHub issue? (not needed for typos or doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?

## What does this PR do?
Adds initialization for `num_pipelines_per_node` in `infer_init_method` in `fairseq/distributed/utils.py`.
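The bug class this PR fixes can be sketched in isolation. The snippet below is a minimal illustration, not fairseq's actual code: `infer_settings` and the value `2` are hypothetical stand-ins. In Python, a variable assigned only inside a conditional branch raises `UnboundLocalError` when read on the path that skips the branch; initializing it before the conditional, as the PR does, avoids this.

```python
def infer_settings_buggy(pipeline_model_parallel: bool):
    # Buggy pattern: num_pipelines_per_node only exists if the branch runs.
    if pipeline_model_parallel:
        num_pipelines_per_node = 2  # hypothetical value for illustration
    # Raises UnboundLocalError when pipeline_model_parallel is False.
    return num_pipelines_per_node


def infer_settings_fixed(pipeline_model_parallel: bool):
    # The fix mirrors the PR: initialize before the conditional so the
    # name is bound on every code path.
    num_pipelines_per_node = None
    if pipeline_model_parallel:
        num_pipelines_per_node = 2  # hypothetical value for illustration
    return num_pipelines_per_node
```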

## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

## Did you have fun?
Make sure you had fun coding!

Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1599

Reviewed By: myleott

Differential Revision: D26208044

Pulled By: girifb

fbshipit-source-id: 98d3c0b70b59a5e0abb027850baa3bc44d9c3c78
Giri Anantharaman authored and harkash committed Feb 23, 2021
1 parent 063d7e3 commit bdc5486
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions fairseq/distributed/utils.py
```diff
@@ -47,6 +47,7 @@ def infer_init_method(cfg: DistributedTrainingConfig, force_distributed=False):
     if cfg.distributed_init_method is not None or cfg.tpu:
         return
 
+    num_pipelines_per_node = None
     if cfg.pipeline_model_parallel:
         num_pipeline_devices, num_pipelines_per_node = _pipeline_parallel_pre_init(cfg)
```
