Phone based LF-MMI training #19
base: master
Conversation
Here are the results I have so far with this pull request:
- HLG decoding (1-best, no LM rescoring), with model averaging from
- HLG decoding (1-best) + 4-gram LM rescoring, with model averaging from
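Both results use model averaging. A minimal sketch of checkpoint averaging, with plain Python dicts of floats standing in for the PyTorch state_dicts a real recipe would average (the names here are hypothetical):

```python
def average_checkpoints(state_dicts):
    """Element-wise average of several checkpoints' parameters.

    Sketch only: plain dicts of floats stand in for the tensor
    state_dicts that real training code would average.
    """
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n
            for k in state_dicts[0]}

# Averaging the checkpoints saved at the last three "epochs".
ckpts = [
    {"encoder.weight": 0.5, "output.bias": 0.25},
    {"encoder.weight": 1.0, "output.bias": 0.50},
    {"encoder.weight": 1.5, "output.bias": 0.75},
]
avg = average_checkpoints(ckpts)
print(avg)  # {'encoder.weight': 1.0, 'output.bias': 0.5}
```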
The plans for the following days are:
(1) Training with an attention decoder. Unlike training with BPE units, where a word has exactly one token sequence, a word may have multiple pronunciations in a phone-based lexicon.
(2) Instead of training a TDNN-LSTM model as a forced-alignment model, integrate the changes from lhotse.
(3) Replace phone-based MMI training with BPE-based MMI training.
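Point (1) hinges on one word mapping to several phone sequences. A toy illustration of why this complicates attention-decoder targets (the lexicon entries below are made up for illustration):

```python
from itertools import product

# Hypothetical phone lexicon: unlike BPE tokenization, a word can map
# to more than one pronunciation (phone sequence).
lexicon = {
    "THE": [["DH", "AH"], ["DH", "IY"]],
    "CAT": [["K", "AE", "T"]],
}

def transcriptions(words, lexicon):
    """Enumerate every phone-level transcription of a word sequence."""
    per_word = [lexicon[w] for w in words]
    return [sum(combo, []) for combo in product(*per_word)]

# "THE CAT" has two candidate phone transcriptions, one per
# pronunciation of "THE"; an attention decoder trained on phones
# must be able to target either of them.
print(transcriptions(["THE", "CAT"], lexicon))
```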
OK, that's great.
Now it supports using an attention decoder along with MMI training. The TensorBoard log for this run is available at https://tensorboard.dev/experiment/Wd049TyrRdyvOkcOiD32FQ/#scalars&_smoothingWeight=0
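One common way to use an attention decoder alongside MMI training is to interpolate the two losses. A sketch of that idea (the scale value and the exact combination rule are assumptions for illustration, not taken from this pull request):

```python
def combined_loss(mmi_loss, att_loss, att_scale=0.7):
    """Interpolate the LF-MMI loss with the attention-decoder loss.

    att_scale is a hypothetical weight, not the value used in the PR;
    in real training both inputs would be scalar tensors.
    """
    return (1.0 - att_scale) * mmi_loss + att_scale * att_loss

# With att_scale=0.5 the combined loss is the plain average.
print(combined_loss(2.0, 4.0, att_scale=0.5))  # 3.0
```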
Wow, great progress!
Phone-based LF-MMI training is easier than wordpiece-based LF-MMI training,
so I would like to get a working version of phone-based MMI training first.