[TTS] Add cosine distance option to TTS aligner #6806
Conversation
This PR is stale because it has been open for 14 days with no activity. Remove stale label or comment or update or this will be closed in 7 days.
LGTM. Added some nit-picks.
One discussion point: I noticed many @staticmethod member functions defined in class AlignmentEncoder. I wonder if nemo.collections.tts.parts.utils.helpers would be a better place to hold them all?
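For illustration, the refactor being suggested would look roughly like the sketch below; the method and helper names here are hypothetical, not taken from the diff:

```python
import torch

# Current shape (sketch): utilities attached to the class as @staticmethods.
class AlignmentEncoder(torch.nn.Module):
    @staticmethod
    def get_dist(keys, queries, mask=None):  # hypothetical signature
        ...

# Suggested shape (sketch): free functions in the shared helpers module,
# e.g. nemo/collections/tts/parts/utils/helpers.py, reusable outside the class.
def get_alignment_dist(keys, queries, mask=None):  # hypothetical name
    ...
```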
Commits:
* [TTS] Add cosine distance option to TTS aligner (Signed-off-by: Ryan <[email protected]>)
* [TTS] Update aligner comments (Signed-off-by: Ryan <[email protected]>)
This PR was also carried into a larger merge commit alongside the following changes:
* Add end_strings to SamplingParams, sampling params, and megatron_gpt_inference.yaml; remove extra_id_1 from default end_strings (Gerald Shen)
* [pre-commit.ci] auto fixes from pre-commit.com hooks (multiple commits; see https://pre-commit.ci)
* Fix require_grad typos (#6930) (Sergii Dymchenko)
* fix syntax error; use proper config (Gerald Shen)
* fix the mpt chatbot (#6957) (#6968) (Yi Dong)
* add support for max_total_length=4096 for 43b (#6763) (Zhilin Wang)
* rnnt_greedy_decoding.py: fix typo auto-repressively -> auto-regressively (#6989) (Vadim Kantorov)
* Cache handling without input tensors mutation (#6980) (#6996) (Boris Fomitchev)
* Hybrid conformer export (#6983) (#6995) (Boris Fomitchev)
* Fixing an issue with confidence ensembles (#6987) (#7004) (Igor Gitman)
* [TTS] Add cosine distance option to TTS aligner (#6806) (Ryan)
* Minor MPT-7B fixes and creation script update (#6982) (Daniel Egert)
* Change Jenkins timeout (#6997) (ericharper)
* remove hard coded input and output fields (#7008) (arendu)
* RoPE length extrapolation with interpolation (#7005) (MaximumEntropy, Evelina)
What does this PR do?
Adds a configuration option to the TTS aligner to use scaled cosine distance instead of Euclidean (L2) distance.
Collection: [TTS]
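For illustration only, enabling the new option might look like the sketch below; the argument name dist_type and its values are assumptions about this PR's API, not confirmed from the diff:

```python
from nemo.collections.tts.modules.aligner import AlignmentEncoder

# Assumed API: a dist_type switch added by this PR, with L2 remaining the default.
aligner = AlignmentEncoder(
    n_mel_channels=80,    # default mel channel count in NeMo's aligner
    n_text_channels=512,  # default text embedding channel count
    dist_type="cosine",   # assumed name of the new option
)
```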
Additional Information
The current aligner used for FastPitch training has a few problems.
Using cosine distance instead of L2 distance can fix them. Cosine distance is, in general, more accurate and stable for similarity problems. It also avoids possible issues with the binarization loss, because the scale of the cosine distance is fixed regardless of the scale or content of the alignment embeddings.
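As a sketch of the two distance functions being compared (illustrative only; the tensor shapes and the scale factor are assumptions, not the exact NeMo implementation):

```python
import torch
import torch.nn.functional as F

def l2_dist(keys: torch.Tensor, queries: torch.Tensor) -> torch.Tensor:
    """Pairwise squared L2 distance.

    keys:    (B, C, T_text) text embeddings
    queries: (B, C, T_mel)  mel embeddings
    returns: (B, T_mel, T_text); its scale grows with embedding magnitude.
    """
    diff = keys.unsqueeze(2) - queries.unsqueeze(3)  # (B, C, T_mel, T_text)
    return (diff ** 2).sum(dim=1)

def scaled_cosine_dist(keys: torch.Tensor, queries: torch.Tensor,
                       scale: float = 15.0) -> torch.Tensor:
    """Scaled cosine distance: bounded in [0, 2 * scale] regardless of
    embedding magnitude, unlike L2."""
    keys_n = F.normalize(keys, dim=1)        # unit-normalize over channels
    queries_n = F.normalize(queries, dim=1)
    sim = torch.einsum("bct,bcm->bmt", keys_n, queries_n)  # cosine similarity
    return scale * (1.0 - sim)               # (B, T_mel, T_text)
```

The soft alignment is then typically softmax(-dist) over the text axis; with a bounded cosine distance those logits stay in a fixed range no matter how large the embeddings grow, which is the stability argument above.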
To test this, I trained FastPitch on the VCTK dataset for 300k steps and provide the alignments and loss graphs below.
As seen, using cosine distance aligns faster, produces more accurate and sharper alignments, and has much lower alignment loss (especially binarization loss). As a result, it converges to a model with significantly better accuracy all around (mel, duration, pitch, energy), though the gap between the two would likely shrink if trained for several million steps.
Alignments
[Example 1: L2 vs. Cosine alignment plots]
[Example 2: L2 vs. Cosine alignment plots]
Alignment Losses
[alignment loss curves, L2 vs. Cosine]
Reconstruction Losses
[reconstruction loss curves, L2 vs. Cosine]