I have trained some Faroese voices using piper_train. The voices actually sound great, but the number-pronunciation rules that were present in the espeak-ng version I trained with are not being followed when I use the .onnx file that comes out of it.
I have added the Faroese language to the espeak-ng repository, but there has not been an official release containing it yet, so in order to train my voices in Faroese I had to use a custom installation of espeak-ng, built from a fresh clone, inside the Docker container where I ran the training.
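(For reference, a quick way to confirm which espeak-ng build is actually being picked up, and whether it applies the Faroese number rules, is something like the sketch below. I'm assuming "fo" is the voice code for Faroese; the flags themselves are standard espeak-ng options.)

```python
import subprocess

# Which espeak-ng is on PATH, and which data directory does it read from?
print(subprocess.run(["espeak-ng", "--version"],
                     capture_output=True, text=True).stdout)

# Does that build know Faroese, and does it expand numbers per the new rules?
# (assuming "fo" is the voice code for Faroese)
print(subprocess.run(["espeak-ng", "-v", "fo", "--ipa", "-q", "123"],
                     capture_output=True, text=True).stdout)
```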
So, since piper_phonemize produces the correct Faroese phonemes for the dataset/training, and espeak-ng has the correct rules during training, I'm thinking the explanation must be that at inference time, when I use the piper.exe binary to produce speech from text, it is using espeak-ng.dll under the hood. Is that so? Do I need to compile a custom espeak-ng.dll to make it work for Faroese? And of course, DLLs are for Windows, so what is the equivalent on Linux?
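For what it's worth, this is the kind of check I would run on the Linux side to see which espeak-ng pieces inference could be picking up. The data directory path below is an assumption from my own setup, not something piper documents:

```python
import ctypes.util
from pathlib import Path

# On Linux, the counterpart of espeak-ng.dll is the shared object libespeak-ng.so
print("libespeak-ng found as:", ctypes.util.find_library("espeak-ng"))

# The number rules end up in the compiled dictionary (fo_dict) inside espeak-ng-data,
# so that directory has to come from a build that already includes the Faroese rules.
data_dir = Path("/usr/share/espeak-ng-data")  # adjust to whatever data dir piper actually uses
print("fo_dict present:", (data_dir / "fo_dict").exists())
```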
Hope my question makes sense :)