forked from aws/amazon-sagemaker-examples
Commit
Sagemaker PyTorch Distributed Model Parallel GPT2 Updates (aws#3311)
* Update changes pertaining to gpt2 example
* Remove transformers dep as using official SageMaker DLC
* Indentation fix
* Update training/distributed_training/pytorch/model_parallel/gpt2/smp-train-gpt-simple.ipynb
* Update training/distributed_training/pytorch/model_parallel/gpt2/train_gpt_simple.py
* Update training/distributed_training/pytorch/model_parallel/gpt2/train_gpt_simple.py
* Disambiguate smp parameters from training parameters
* Updated comments for SMP related modifications
* Remove SM Experiments instance
* Remove unnecessary condition
* pytorch -> huggingface comment
* Update branding
* Comment formatting

Co-authored-by: Suhit Kodgule <[email protected]>
Co-authored-by: Miyoung <[email protected]>
1 parent 4ccb050, commit ccc102a
Showing 5 changed files with 166 additions and 111 deletions.
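The commit message above touches two configuration concerns: the SMP-specific parameters are kept separate from the ordinary training hyperparameters, and the transformers pin is dropped because the official SageMaker HuggingFace DLC already ships the library. The sketch below is not taken from the updated notebook; it is a minimal, hypothetical launcher showing how those two pieces are typically wired together with the SageMaker Python SDK (instance type, framework versions, and all parameter values are assumptions chosen for illustration).

# Hypothetical launcher sketch (not from the notebook); values below are illustrative.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

# Ordinary training hyperparameters consumed by the training script (assumed values).
hyperparameters = {
    "max_steps": 100,
    "seq_length": 1024,
    "train_batch_size": 4,
}

# SMP configuration, kept separate from the training hyperparameters above.
smp_options = {
    "enabled": True,
    "parameters": {
        "pipeline_parallel_degree": 2,
        "tensor_parallel_degree": 4,
        "ddp": True,
    },
}

estimator = HuggingFace(
    entry_point="train_gpt_simple.py",
    source_dir=".",                   # a requirements.txt here is installed automatically
    role=role,
    instance_type="ml.p4d.24xlarge",  # assumed instance type
    instance_count=1,
    transformers_version="4.6",       # provided by the DLC, so no pin in requirements.txt
    pytorch_version="1.8",
    py_version="py36",
    hyperparameters=hyperparameters,
    distribution={
        "smdistributed": {"modelparallel": smp_options},
        "mpi": {"enabled": True, "processes_per_host": 8},
    },
)
estimator.fit()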
37 changes: 37 additions & 0 deletions
training/distributed_training/pytorch/model_parallel/gpt2/memory_tracker.py
@@ -0,0 +1,37 @@
import smdistributed.modelparallel.torch as smp
import torch


def memory_status(msg="", reset_max=True, sync=True):
    # Identify this process within the SMP process groups.
    rank = smp.rank()
    tp_rank = smp.tp_rank()
    pp_rank = smp.pp_rank()
    rdp_rank = smp.rdp_rank()
    local_rank = smp.local_rank()

    if sync:
        torch.cuda.synchronize()

    # Only report from the first reduced-data-parallel rank to avoid duplicate logs.
    if rdp_rank != 0:
        return

    alloced = torch.cuda.memory_allocated(device=local_rank)
    max_alloced = torch.cuda.max_memory_allocated(device=local_rank)
    cached = torch.cuda.memory_reserved(device=local_rank)
    max_cached = torch.cuda.max_memory_reserved(device=local_rank)

    # Convert bytes to GB for printing.
    alloced /= 1024**3
    cached /= 1024**3
    max_alloced /= 1024**3
    max_cached /= 1024**3

    print(
        f'[{msg}] rank {rank} tp_rank {tp_rank} pp_rank {pp_rank} TORCH {torch.__version__}',
        f'device={local_rank} '
        f'alloc {alloced:0.4f} max_alloced {max_alloced:0.4f} '
        f'cache {cached:0.4f} max_cached {max_cached:0.4f}'
    )
    # Reset the peak counters so the next call reports peaks for the interval since this one.
    if reset_max:
        torch.cuda.reset_max_memory_cached()
        torch.cuda.reset_max_memory_allocated()
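A hedged sketch of how this helper might be called from the training script; the loop and function names below are assumptions, not lines taken from train_gpt_simple.py.

import smdistributed.modelparallel.torch as smp

from memory_tracker import memory_status


@smp.step
def train_step(model, batch):
    # Forward and backward passes run inside the smp.step-decorated function.
    loss = model(**batch)["loss"]
    model.backward(loss)
    return loss


def train_loop(model, optimizer, dataloader, log_every=10):
    for step, batch in enumerate(dataloader):
        train_step(model, batch)
        optimizer.step()
        optimizer.zero_grad()
        if step % log_every == 0:
            # Print per-rank GPU memory and reset the peak counters for the next interval.
            memory_status(msg=f"step {step}")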
@@ -4,6 +4,5 @@ sagemaker
 sagemaker-experiments
 scipy
 torchnet
-transformers==4.4.2
 smdebug
 humanize