Fixing bug in Megatron BERT when loss mask is all zeros #5424
Conversation
Signed-off-by: Shanmugam Ramasamy <[email protected]>
[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
LGTM
Hey! Sorry to block merging, but I think this might be a symptom of a bug at the data level and we should try to get to the bottom of that and not handle it this way.
It is related to the dataset @shanmugamr1992 is using. I didn't see this behavior in my biomegatron dataset before.
@yidong72 @MaximumEntropy So this is not about my dataset. The number of tokens to be masked is calculated as (total_tokens * masking_probability). So if the total number of tokens is 200 and the masking probability is 0.15, then 30 random tokens are masked. However, if the number of real tokens is small (say 4 or 5), meaning the rest of the 512 positions are just zero padding, we can end up with nothing to predict (the loss mask is all zeros).
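To make that failure mode concrete, here is a minimal, self-contained sketch, not the NeMo implementation itself: `masked_lm_prob`, `max_seq_length`, and the truncating count below are illustrative assumptions showing how a very short example can produce an all-zero loss mask.

```python
# A minimal sketch (not the NeMo code) of why a short example can end up with
# an all-zero loss mask. masked_lm_prob, max_seq_length, and the truncating
# computation are assumptions for illustration only.
import random

def build_loss_mask(tokens, masked_lm_prob=0.15, max_seq_length=512):
    # Number of positions selected for masked-LM prediction.
    # With 200 real tokens this gives 30; with 4 or 5 tokens it gives 0.
    num_to_predict = int(len(tokens) * masked_lm_prob)
    masked_positions = random.sample(range(len(tokens)), num_to_predict)

    # The loss is only computed at masked positions; everything else,
    # including the zero padding out to max_seq_length, stays 0.
    loss_mask = [0] * max_seq_length
    for pos in masked_positions:
        loss_mask[pos] = 1
    return loss_mask

print(sum(build_loss_mask(["tok"] * 200)))  # 30 positions contribute to the loss
print(sum(build_loss_mask(["tok"] * 4)))    # 0 -> nothing to predict, loss mask all zeros
```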
Signed-off-by: Shanmugam Ramasamy <[email protected]>
Signed-off-by: Shanmugam Ramasamy <[email protected]>
for more information, see https://pre-commit.ci
Review thread on nemo/collections/nlp/data/language_modeling/megatron/dataset_utils.py (outdated, resolved)
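For context, one hypothetical dataset-side guard of the kind discussed in this thread is shown below. The function name and the exact behavior are assumptions for illustration; this is not the actual change made to dataset_utils.py in this PR.

```python
# Hypothetical illustration of a dataset-side guard: force at least one
# masked-LM prediction whenever the example has any real tokens, so the
# loss mask can never be all zeros for a non-empty example.
def safe_num_to_predict(num_real_tokens: int, masked_lm_prob: float = 0.15) -> int:
    if num_real_tokens == 0:
        return 0
    return max(1, int(round(num_real_tokens * masked_lm_prob)))
```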
Signed-off-by: Shanmugam Ramasamy <[email protected]>
Signed-off-by: Shanmugam Ramasamy <[email protected]>
* Fixing bug when loss mask is fully zero (Signed-off-by: Shanmugam Ramasamy <[email protected]>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update megatron_bert_model.py (Signed-off-by: Shanmugam Ramasamy <[email protected]>)
* Update dataset_utils.py (Signed-off-by: Shanmugam Ramasamy <[email protected]>)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update dataset_utils.py (Signed-off-by: Shanmugam Ramasamy <[email protected]>)
* Update dataset_utils.py (Signed-off-by: Shanmugam Ramasamy <[email protected]>)

Signed-off-by: Shanmugam Ramasamy <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sandeep Subramanian <[email protected]>
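The model-side half of a fix like the one the "Update megatron_bert_model.py" commits describe usually amounts to guarding the loss reduction against an empty mask. The sketch below is an illustrative PyTorch version under that assumption, not code copied from megatron_bert_model.py.

```python
# Illustrative guard for the masked-LM loss reduction when the loss mask is
# all zeros. Assumes a PyTorch-style per-token loss and a 0/1 loss_mask of
# the same shape; this is a sketch, not the actual NeMo function.
import torch

def masked_lm_loss(per_token_loss: torch.Tensor, loss_mask: torch.Tensor) -> torch.Tensor:
    loss_mask = loss_mask.float().reshape(-1)
    num_predicted = loss_mask.sum()
    masked_sum = torch.sum(per_token_loss.reshape(-1) * loss_mask)
    if num_predicted == 0:
        # Loss mask is all zeros: masked_sum is already 0 and still attached
        # to the graph, so return it instead of dividing by zero (NaN loss).
        return masked_sum
    return masked_sum / num_predicted
```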
* Fixing bug when loss mask is fully zero Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update megatron_bert_model.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> Signed-off-by: Shanmugam Ramasamy <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Sandeep Subramanian <[email protected]> Signed-off-by: Hainan Xu <[email protected]>
* first commit on eval_diar_with_asr.py Signed-off-by: Taejin Park <[email protected]> * Add a standalone diarization-ASR evaluation transcript Signed-off-by: Taejin Park <[email protected]> * Fixed examples in docstrings Signed-off-by: Taejin Park <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixed staticmethod error Signed-off-by: Taejin Park <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Added description on eval modes Signed-off-by: Taejin Park <[email protected]> * adding diar_infer_general.yaml Signed-off-by: Taejin Park <[email protected]> * fix msdd_model in general yaml file Signed-off-by: Taejin Park <[email protected]> * fixed errors in yaml file Signed-off-by: Taejin Park <[email protected]> * combine into 1 commit Signed-off-by: Taejin Park <[email protected]> * Added description on eval modes Signed-off-by: Taejin Park <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Add MoE support for T5 model (w/o expert parallel) (NVIDIA#5409) * clean Signed-off-by: Abhinav Khattar <[email protected]> * kwarg ref Signed-off-by: Abhinav Khattar <[email protected]> * fix Signed-off-by: Abhinav Khattar <[email protected]> * fix Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * extra args Signed-off-by: Abhinav Khattar <[email protected]> * test Signed-off-by: Abhinav Khattar <[email protected]> * rm prints Signed-off-by: Abhinav Khattar <[email protected]> * style Signed-off-by: Abhinav Khattar <[email protected]> * review comments Signed-off-by: Abhinav Khattar <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * review comments Signed-off-by: Abhinav Khattar <[email protected]> * review comments Signed-off-by: Abhinav Khattar <[email protected]> * fix Signed-off-by: Abhinav Khattar <[email protected]> Signed-off-by: Abhinav Khattar <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Fix args (NVIDIA#5410) (NVIDIA#5416) Signed-off-by: MaximumEntropy <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: Sandeep Subramanian <[email protected]> * Fix for concat map dataset (NVIDIA#5133) * change for concat map dataset * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Exhaust longest dataset * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: 1-800-BAD-CODE <> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Oleksii Kuchaiev <[email protected]> Co-authored-by: Sandeep Subramanian <[email protected]> * Add temporary fix for CUDA issue in Dockerfile (NVIDIA#5421) (NVIDIA#5422) Signed-off-by: Yu Yao <[email protected]> Signed-off-by: Yu Yao <[email protected]> Signed-off-by: Yu Yao <[email protected]> Co-authored-by: yaoyu-33 <[email 
protected]> * Fix GPT generation when using sentencepiece tokenizer (NVIDIA#5413) (NVIDIA#5428) * Fix Signed-off-by: MaximumEntropy <[email protected]> * Fix Signed-off-by: MaximumEntropy <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: Yi Dong <[email protected]> Co-authored-by: Oleksii Kuchaiev <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: Sandeep Subramanian <[email protected]> Co-authored-by: Yi Dong <[email protected]> Co-authored-by: Oleksii Kuchaiev <[email protected]> * Support for finetuning and finetuning inference with .ckpt files & batch size refactoring (NVIDIA#5339) * Initial refactor Signed-off-by: MaximumEntropy <[email protected]> * Resolve config before passing to load_from_checkpoint Signed-off-by: MaximumEntropy <[email protected]> * Fixes for model parallel and nemo restore Signed-off-by: MaximumEntropy <[email protected]> * Fixes for eval Signed-off-by: MaximumEntropy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Revert config changes Signed-off-by: MaximumEntropy <[email protected]> * Refactor Signed-off-by: MaximumEntropy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix typo Signed-off-by: MaximumEntropy <[email protected]> * Remove comments Signed-off-by: MaximumEntropy <[email protected]> * Minor Signed-off-by: MaximumEntropy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fix validation reconfiguration Signed-off-by: MaximumEntropy <[email protected]> * Remove old comment Signed-off-by: MaximumEntropy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Fixes for test_ds Signed-off-by: MaximumEntropy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> * Revert "Add temporary fix for CUDA issue in Dockerfile (NVIDIA#5421)" (NVIDIA#5431) (NVIDIA#5432) This reverts commit 0718b17. 
Co-authored-by: yaoyu-33 <[email protected]> * [ITN] fix year date graph, cardinals extension for hundreds (NVIDIA#5435) * wip Signed-off-by: ekmb <[email protected]> * add lociko's hundreds extension for cardinals Signed-off-by: ekmb <[email protected]> * add optional end Signed-off-by: ekmb <[email protected]> * restart ci Signed-off-by: ekmb <[email protected]> Signed-off-by: ekmb <[email protected]> * update doc in terms of get_label for lang id model (NVIDIA#5366) * reflect PR 5278 ion doc Signed-off-by: fayejf <[email protected]> * reflect comment Signed-off-by: fayejf <[email protected]> Signed-off-by: fayejf <[email protected]> * Revert workaround for T5 that sets number of workers to 0 & sync_batch_comm=False (NVIDIA#5420) (NVIDIA#5433) * Revert workers workaround Signed-off-by: MaximumEntropy <[email protected]> * Fix in config Signed-off-by: MaximumEntropy <[email protected]> * Fix Signed-off-by: MaximumEntropy <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: Oleksii Kuchaiev <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Co-authored-by: Sandeep Subramanian <[email protected]> Co-authored-by: Oleksii Kuchaiev <[email protected]> * Fixed bug in notebook (NVIDIA#5382) (NVIDIA#5394) Signed-off-by: Virginia Adams <[email protected]> Signed-off-by: Virginia Adams <[email protected]> Signed-off-by: Virginia Adams <[email protected]> Co-authored-by: Virginia Adams <[email protected]> * Fixing bug in Megatron BERT when loss mask is all zeros (NVIDIA#5424) * Fixing bug when loss mask is fully zero Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update megatron_bert_model.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> Signed-off-by: Shanmugam Ramasamy <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Sandeep Subramanian <[email protected]> * Use updated API for overlapping grad sync with pipeline parallelism (NVIDIA#5236) Signed-off-by: Tim Moon <[email protected]> Signed-off-by: Tim Moon <[email protected]> * support to disable sequence length + 1 input tokens for each sample in MegatronGPT (NVIDIA#5363) * support to disable sequence length + 1 input tokens for MegatronGPT * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci Co-authored-by: Anmol Gupta <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Sandeep Subramanian <[email protected]> * [TTS] Create script for processing TTS training audio (NVIDIA#5262) * Create script for processing TTS training audio * Update VAD trimming logic * Remove unused import Signed-off-by: Ryan <[email protected]> * [TTS] remove useless logic for set_tokenizer. 
(NVIDIA#5430) Signed-off-by: Xuesong Yang <[email protected]> * Fix setting up of `ReduceLROnPlateau` learning rate scheduler (NVIDIA#5444) * Fix tests Signed-off-by: PeganovAnton <[email protected]> * Add accidentally lost changes Signed-off-by: PeganovAnton <[email protected]> Signed-off-by: PeganovAnton <[email protected]> * Create codeql.yml (NVIDIA#5445) Signed-off-by: Somshubra Majumdar <[email protected]> Signed-off-by: Somshubra Majumdar <[email protected]> * Fix for getting tokenizer in character-based ASR models when using tarred dataset (NVIDIA#5442) Signed-off-by: Jonghwan Hyeon <[email protected]> Signed-off-by: Jonghwan Hyeon <[email protected]> * Combine 5 commits adding diar_infer_general.yaml Signed-off-by: Taejin Park <[email protected]> Update codeql.yml Signed-off-by: Somshubra Majumdar <[email protected]> Update codeql.yml Signed-off-by: Somshubra Majumdar <[email protected]> fix msdd_model in general yaml file Signed-off-by: Taejin Park <[email protected]> fixed errors in yaml file Signed-off-by: Taejin Park <[email protected]> * moved eval_der function and fixed tqdm options Signed-off-by: Taejin Park <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Changed minor error in docstrings Signed-off-by: Taejin Park <[email protected]> * removed score_labels and changed leave=True Signed-off-by: Taejin Park <[email protected]> Signed-off-by: Taejin Park <[email protected]> Signed-off-by: Abhinav Khattar <[email protected]> Signed-off-by: MaximumEntropy <[email protected]> Signed-off-by: Yu Yao <[email protected]> Signed-off-by: ekmb <[email protected]> Signed-off-by: fayejf <[email protected]> Signed-off-by: Virginia Adams <[email protected]> Signed-off-by: Shanmugam Ramasamy <[email protected]> Signed-off-by: Tim Moon <[email protected]> Signed-off-by: Ryan <[email protected]> Signed-off-by: Xuesong Yang <[email protected]> Signed-off-by: PeganovAnton <[email protected]> Signed-off-by: Somshubra Majumdar <[email protected]> Signed-off-by: Jonghwan Hyeon <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Abhinav Khattar <[email protected]> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Sandeep Subramanian <[email protected]> Co-authored-by: Shane Carroll <[email protected]> Co-authored-by: Oleksii Kuchaiev <[email protected]> Co-authored-by: yaoyu-33 <[email protected]> Co-authored-by: Yi Dong <[email protected]> Co-authored-by: Evelina <[email protected]> Co-authored-by: fayejf <[email protected]> Co-authored-by: Virginia Adams <[email protected]> Co-authored-by: Shanmugam Ramasamy <[email protected]> Co-authored-by: Tim Moon <[email protected]> Co-authored-by: anmolgupt <[email protected]> Co-authored-by: Anmol Gupta <[email protected]> Co-authored-by: Ryan Langman <[email protected]> Co-authored-by: Xuesong Yang <[email protected]> Co-authored-by: PeganovAnton <[email protected]> Co-authored-by: Somshubra Majumdar <[email protected]> Co-authored-by: Jonghwan Hyeon <[email protected]> Signed-off-by: Hainan Xu <[email protected]>
* Fixing bug when loss mask is fully zero Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update megatron_bert_model.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> * Update dataset_utils.py Signed-off-by: Shanmugam Ramasamy <[email protected]> Signed-off-by: Shanmugam Ramasamy <[email protected]> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Sandeep Subramanian <[email protected]> Signed-off-by: andrusenkoau <[email protected]>
Signed-off-by: Shanmugam Ramasamy <[email protected]>
What does this PR do?
Fixes a bug in Megatron BERT pretraining for batches whose loss mask is all zeros (no masked tokens to predict), updating megatron_bert_model.py and dataset_utils.py to handle this case.
Collection: NLP
Changelog
- Handle a fully zero loss mask in megatron_bert_model.py.
- Update dataset_utils.py for the same all-zeros case.
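Such batches arise when a sample ends up with no positions selected for prediction, so every entry in its loss mask is zero. A quick way to check whether a given pretraining dataloader actually produces such batches is to scan the loss masks directly. The sketch below is a hypothetical diagnostic, not code from this PR; the `loss_mask` key and the batch layout are assumptions about how the dataloader packages each batch.

```python
import torch


def count_empty_loss_masks(dataloader, key: str = "loss_mask") -> int:
    """Count batches whose loss mask is all zeros, i.e. nothing to predict."""
    empty = 0
    for batch in dataloader:
        # Assumed shape [batch_size, seq_length], with 1 at positions that contribute to the loss.
        loss_mask = batch[key]
        if torch.count_nonzero(loss_mask) == 0:
            empty += 1
    return empty
```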
Usage
Pretraining Megatron BERT runs exactly as before; the change is internal to how the loss mask is handled.
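As an illustration, the snippet below is a minimal sketch, not the exact NeMo implementation, written under the assumption that the masked-LM loss is averaged over the positions selected by the loss mask; with an all-zero mask that average would divide by zero, so the guard returns a zero loss instead.

```python
import torch


def masked_lm_loss(per_token_loss: torch.Tensor, loss_mask: torch.Tensor) -> torch.Tensor:
    """Average per-token losses over masked positions, tolerating an all-zero mask.

    per_token_loss: [batch_size, seq_length] unreduced cross-entropy values
    loss_mask:      [batch_size, seq_length], 1 at positions that should be predicted
    """
    loss_mask = loss_mask.float().view(-1)
    masked_sum = torch.sum(per_token_loss.view(-1) * loss_mask)
    num_predictions = loss_mask.sum()
    if num_predictions == 0:
        # All-zero loss mask: nothing to predict in this batch. masked_sum is already
        # zero here, so return it directly rather than dividing by zero, which would
        # turn the training loss into NaN.
        return masked_sum
    return masked_sum / num_predictions
```

Used in place of a bare `masked_sum / loss_mask.sum()`, this keeps an occasional all-zero batch from producing a NaN loss.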
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
Contributor guidelines list specific people who can review PRs in various areas.
Additional Information