Evaluation of ASR with LM increases WER #629
Hi, I use the pretrained QuartzNet checkpoint and adapt it to custom data. After some training, the model reaches around 27% WER. Then, when I use KenLM for beam-search rescoring, the WER increases by approximately 5%. I tried the KenLM pretrained on LibriSpeech as well as a KenLM trained on my own data. The former was slightly worse than the latter, but both increased the WER. What could be the possible causes? I checked whether the models use the same alphabet, etc., but could not find a reasonable explanation.
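For context, the beam-search decoder's label set has to match the acoustic model's output vocabulary exactly, and the LM weight (alpha) and word-insertion bonus (beta) have to be tuned on held-out data. Below is a minimal, generic sketch of KenLM beam-search decoding for a character-level CTC model such as QuartzNet, written with pyctcdecode rather than the NeMo decoder that jasper_eval invokes; the vocabulary list and file paths are placeholders.

```python
# Minimal sketch (not the NeMo jasper_eval path from this issue): KenLM
# beam-search decoding for a character-level CTC model with pyctcdecode.
# The vocabulary list and file paths are placeholders.
import numpy as np
from pyctcdecode import build_ctcdecoder

# The label list must match the acoustic model's output vocabulary; a mismatch
# here is a common reason LM decoding makes WER worse.
asr_vocabulary = list(" abcdefghijklmnopqrstuvwxyz'")  # example character set

decoder = build_ctcdecoder(
    labels=asr_vocabulary,
    kenlm_model_path="custom_lm.binary",  # placeholder: KenLM trained on in-domain text
    alpha=0.5,  # LM weight
    beta=1.5,   # word-insertion bonus
)

# Per-frame token log-probabilities from the acoustic model for one utterance.
log_probs = np.load("utt0001_log_probs.npy")  # placeholder
print(decoder.decode(log_probs))
```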
Comments

Yes, I used the jasper_eval script from the documentation, and there is a grid search over alphas and betas included.
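For readers who have not seen such a search, the idea is simply to decode a held-out set once per (alpha, beta) pair and keep the pair with the lowest WER. The sketch below illustrates this with pyctcdecode and jiwer rather than NeMo's own decoder; the vocabulary, LM path, and dev-set files are placeholders.

```python
# Rough sketch of an alpha/beta grid search, using pyctcdecode and jiwer
# instead of NeMo's decoder. Vocabulary, LM path, and dev-set files are
# placeholders.
import itertools
import jiwer
import numpy as np
from pyctcdecode import build_ctcdecoder

asr_vocabulary = list(" abcdefghijklmnopqrstuvwxyz'")                  # example character set
dev_log_probs = [np.load(f"dev_utt{i:04d}.npy") for i in range(100)]   # placeholder dev logits
dev_references = open("dev_refs.txt").read().splitlines()              # placeholder references

alphas = [0.5, 1.0, 1.5, 2.0]  # LM weight candidates
betas = [0.0, 0.5, 1.0, 1.5]   # word-insertion bonus candidates

best_wer, best_alpha, best_beta = float("inf"), None, None
for alpha, beta in itertools.product(alphas, betas):
    # Rebuilding the decoder reloads the KenLM each iteration; slow but simple.
    decoder = build_ctcdecoder(
        labels=asr_vocabulary,
        kenlm_model_path="custom_lm.binary",  # placeholder
        alpha=alpha,
        beta=beta,
    )
    hypotheses = [decoder.decode(lp) for lp in dev_log_probs]
    wer = jiwer.wer(dev_references, hypotheses)
    if wer < best_wer:
        best_wer, best_alpha, best_beta = wer, alpha, beta

print(f"best WER {best_wer:.4f} at alpha={best_alpha}, beta={best_beta}")
```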