
Evaluation of ASR with LM increases WER #629

Closed
janvainer opened this issue May 13, 2020 · 3 comments
Comments

@janvainer

Hi, I use the pretrained QuartzNet checkpoint and adapt it to custom data. After some training, the model reaches around 27% WER. However, when I use KenLM for beam search rescoring, the WER increases by approximately 5%. I tried both the KenLM model pretrained on LibriSpeech and a KenLM model trained on my own data. The former was slightly worse than the latter, but both increased WER. What could be the possible causes? I checked that the models use the same alphabet, etc., but could not find a reasonable explanation.

@vsl9
Collaborator

vsl9 commented May 13, 2020

Hi, have you tried tuning the alpha and beta parameters?

@janvainer
Author

Yes, I used the jasper_eval script from the documentation, which includes a grid search over alphas and betas.

@janvainer janvainer reopened this May 14, 2020
@dangvansam

> Hi, have you tried to tune alpha, beta parameters?

How do I tune them?
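In beam-search decoding with an external LM, alpha typically weights the LM score and beta rewards word insertions, and the usual approach is exactly the grid search mentioned above: rescore a fixed set of beam hypotheses for each (alpha, beta) pair on a dev set and keep the pair with the lowest WER. Here is a minimal, self-contained sketch of that idea; it does not use NeMo's actual jasper_eval code, and the data structure (a list of utterances, each with a reference and a beam of candidates carrying hypothetical `am`/`lm` scores) is an assumption for illustration:

```python
import itertools

def wer(ref, hyp):
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

def rescore(beam, alpha, beta):
    """Pick the candidate maximizing am_score + alpha * lm_score + beta * word_count."""
    return max(beam, key=lambda c: c["am"] + alpha * c["lm"] + beta * len(c["text"].split()))

def grid_search(dev_set, alphas, betas):
    """Return (alpha, beta, avg_wer) minimizing average WER on the dev set."""
    best = None
    for alpha, beta in itertools.product(alphas, betas):
        total = sum(wer(u["ref"], rescore(u["beam"], alpha, beta)["text"])
                    for u in dev_set)
        avg = total / len(dev_set)
        if best is None or avg < best[0]:
            best = (avg, alpha, beta)
    return best[1], best[2], best[0]
```

If the best dev-set WER is achieved at alpha close to 0, the LM is not helping, which can point to a mismatch between the LM's training text and the test domain, or to a tokenization/normalization mismatch between the acoustic model and the LM.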

dcurran90 pushed a commit to dcurran90/NeMo that referenced this issue Oct 15, 2024
Use a single Jinja template for the prompts with and
without a document. Also remove the conditionals
checking for the presence of a document.

Fixes NVIDIA#629

Signed-off-by: Derek Higgins <[email protected]>