
[Help] Share your TTS models #380

Closed
erogol opened this issue Mar 15, 2021 · 26 comments
Labels: help wanted (Contributions welcome!!)

Comments

@erogol (Member) commented Mar 15, 2021

Please consider sharing your pre-trained models in any language (if the licenses allow it).

We can include them in our model catalogue for public use, with attribution to you (name, website, company, etc.).

That would enable more people to experiment and coordinate together, instead of duplicating individual efforts toward similar goals.

That is also a chance to make your work more visible.

You can share in two ways:

  1. Share the model files with us, and we will serve them with the next 🐸 TTS release.
  2. Upload your models to Google Drive and share the link.

Models are listed in the .models.json file, and every listed model is available through the tts CLI or the server endpoints. More details...
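For context, here is a minimal sketch of what using a listed model from Python can look like. The module paths, function signatures, and the example model name are assumptions that may differ between 🐸 TTS versions, so check `tts --list_models` and the docs for the exact names:

```python
# Minimal sketch: fetch a model listed in .models.json and synthesize with it.
# Module paths, signatures, and the model name are illustrative and may differ
# between TTS versions.
from pathlib import Path

import TTS
from TTS.utils.manage import ModelManager
from TTS.utils.synthesizer import Synthesizer

# The .models.json catalogue ships inside the installed TTS package.
manager = ModelManager(Path(TTS.__file__).parent / ".models.json")

# Example catalogue entry; `tts --list_models` shows the names actually available.
model_path, config_path, _ = manager.download_model("tts_models/en/ljspeech/tacotron2-DCA")

synthesizer = Synthesizer(model_path, config_path)
wav = synthesizer.tts("A test sentence from a shared model.")
synthesizer.save_wav(wav, "output.wav")
```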

(previously mozilla/TTS#395)

erogol added the help wanted (Contributions welcome!!) label Mar 15, 2021
erogol pinned this issue Mar 15, 2021
coqui-ai deleted a comment from snakers4 Apr 2, 2021
@enjikaka commented

Any ELI5 tutorial/doc for creating a dataset for your own language/dialect?

@erogol (Member, Author) commented Apr 15, 2021

Not sure if it is ELI5, but there is this link https://github.com/coqui-ai/TTS/wiki/What-makes-a-good-TTS-dataset

Also, @thorstenMueller has created a TTS dataset from the get-go, so he might have valuable comments if you have specific questions.

@thorstenMueller (Contributor) commented

Feel free to ask specific questions. I'd be happy to share my experience of recording a new dataset here.

  • Find or create a text corpus to record (one sentence = one recording)
  • Replace numbers with their spelled-out form
  • Create a CSV file from the corpus (see the sketch after this list)
  • Check out Mimic-Recording-Studio from Mycroft as a recording environment (https://github.com/MycroftAI/mimic-recording-studio)
  • Start recording
    • Keep a constant speaking speed across recordings
    • Pronounce all characters clearly
    • Speak in a neutral voice
    • Use good microphone equipment
    • Find a recording place without random noise
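Since the number expansion and CSV steps are where most questions come up, here is a rough sketch of turning a one-sentence-per-line corpus into an LJSpeech-style metadata.csv; the file names, the num2words dependency, and the language code are illustrative assumptions, not part of Thorsten's actual pipeline:

```python
# Rough sketch: build an LJSpeech-style metadata.csv (id|raw text|normalized text)
# from a corpus with one sentence per line. Assumes the num2words package is
# installed; file names and the language code are illustrative only.
import csv
import re

from num2words import num2words

def expand_numbers(text: str, lang: str = "en") -> str:
    """Replace digit groups with their spelled-out form."""
    return re.sub(r"\d+", lambda m: num2words(int(m.group()), lang=lang), text)

with open("corpus.txt", encoding="utf-8") as src, \
        open("metadata.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, delimiter="|")
    for i, line in enumerate(src, start=1):
        sentence = line.strip()
        if not sentence:
            continue
        # The id must match the wav file name (without .wav) produced while recording.
        writer.writerow([f"recording_{i:04d}", sentence, expand_numbers(sentence)])
```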

@Sadam1195 (Contributor) commented

Hi @erogol, thank you for the amazing work, from Mozilla TTS to coqui-ai. Mozilla seemed ideal to me because of its wider community reach, so I just hope this grows even wider and faster than Mozilla. I am planning to share my models for Spanish and Italian (Tacotron 2 at 600k steps + WaveRNN). The audio quality seems good, but I need to train a bit more and also ask the dataset providers whether it is okay for me to make the models public.
Fingers crossed.

Let me know if I can contribute in any way; I have spare Google Colab Pro resources lying around.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   35C    P0    24W / 300W |      0MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

@erogol (Member, Author) commented Apr 21, 2021

@Sadam1195 thx for the amazing work 🚀🚀.

I really hope we can include your models, of course with the right attribution going to you.

Just waiting for your signal.

For general contribution, this is a nice place to start https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md

If you'd just like to train models, let me know; we can also find new datasets to attack.

@Sadam1195 (Contributor) commented Apr 21, 2021

> I really hope we can include your models, of course with the right attribution going to you.

I hope they allow me; otherwise I would see it as a waste of my time and effort.

> Just waiting for your signal.

I will let you know when I get the confirmation.

> If you'd just like to train models, let me know; we can also find new datasets to attack.

Training models on Colab can be a bit annoying, as sessions often get disconnected even with all the tricks in the book.

Nonetheless, I would love to train models on new datasets (if you have any), especially in languages for which TTS models haven't been made public yet.

@kaiidams (Contributor) commented

Hello,

I've just started training Tacotron 2 on a public-domain Japanese dataset, https://github.com/kaiidams/Kokoro-Speech-Dataset, using the latest master of https://github.com/mozilla/TTS on Google Colab Free. After 19K steps I can hear what the speaker says, although it sounds metallic.

To proceed, I'd like to know which repo and branch you recommend I use. https://github.com/erogol/TTS_recipes seems a bit old.

@Sadam1195 (Contributor) commented May 16, 2021

> To proceed, I'd like to know which repo and branch you recommend I use. https://github.com/erogol/TTS_recipes seems a bit old.

Please use https://github.com/coqui-ai/TTS instead of https://github.com/mozilla/TTS, with the latest main branch. @kaiidams

@kaiidams (Contributor) commented May 21, 2021

@Sadam1195 @erogol

I trained Tacotron 2 for 130K steps with this code https://github.com/kaiidams/TTS/tree/kaiidams/kokoro which was forked from the latest main.
https://drive.google.com/drive/folders/1-1_HB-ogmvD-qYaHm8D5Xp1pWq9HKhB_?usp=sharing
The included sample.wav was generated with vocoder_models/universal/libri-tts/wavegrad.

The model's input is romanized Japanese text; converting ordinary text into it requires some dependencies, such as MeCab.
The dataset is in the public domain, and the reader is aware of the dataset. I think I can provide Python code for the text conversion.
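For illustration only (this is not the MeCab-based pipeline used for the Kokoro dataset, just one way to sketch the romanization step), the conversion could look roughly like this with the pykakasi package:

```python
# Illustrative sketch of Japanese-to-romaji conversion with pykakasi; the actual
# Kokoro text conversion uses a MeCab-based pipeline and may differ in detail.
import pykakasi

kks = pykakasi.kakasi()

def romanize(text: str) -> str:
    # convert() returns one dict per token; "hepburn" holds the romanized form.
    return " ".join(item["hepburn"] for item in kks.convert(text))

print(romanize("音声合成のテスト"))  # roughly: "onsei gousei no tesuto"
```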

@erogol (Member, Author) commented May 21, 2021

@kaiidams if you can send a PR for the text conversion, something similar to the Chinese API we have, along with the model, that would be a great contribution.

@zubairahmed-ai commented

> Feel free to ask specific questions. I'd be happy to share my experience of recording a new dataset here.
>
>   • Find or create a text corpus to record (one sentence = one recording)
>   • Replace numbers with their spelled-out form
>   • Create a CSV file from the corpus
>   • Check out Mimic-Recording-Studio from Mycroft as a recording environment (https://github.com/MycroftAI/mimic-recording-studio)
>   • Start recording
>     • Keep a constant speaking speed across recordings
>     • Pronounce all characters clearly
>     • Speak in a neutral voice
>     • Use good microphone equipment
>     • Find a recording place without random noise

Any reason why this and this aren't in the README?
I had to look up training to get here.

@thorstenMueller (Contributor) commented

Hi @zubairahmed-ai.
Here's a talk I gave on how to record a voice dataset, if that's helpful for you.

https://youtu.be/m-Uwb-Bg144

@zubairahmed-ai commented

@thorstenMueller Perfect timing, thank you

@zubairahmed-ai commented

Oh, just realized this talk happened during the recent Google I/O and I somehow didn't catch it while watching other videos :)

@zubairahmed-ai commented

@thorstenMueller Thanks so much for the great video explaining your process in detail, with some tips. I'll make sure I follow it. Do you plan to try other models besides Tacotron 2, like Align-TTS?

@thorstenMueller (Contributor) commented

You're welcome @zubairahmed-ai :-).
I'm currently finishing some recordings for my emotional dataset and training a Fullband-MelGAN vocoder, so I've no time left to look at other models like Align-TTS. But feel free to train a "Thorsten" model with Align-TTS ;-).

stale bot commented Jul 10, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.

The stale bot added the wontfix (This will not be worked on but feel free to help.) label Jul 10, 2021
erogol removed the wontfix label Jul 11, 2021
@ravi-maithrey (Contributor) commented

A note asking people to share their models could also be added to CONTRIBUTING.md, since it is a request for contributions. I'd be up for doing that, if no one has taken it on yet.

@erogol (Member, Author) commented Jul 14, 2021

Yeah, good point. Feel free to take it on.

The stale bot added the wontfix label Aug 13, 2021
coqui-ai deleted a comment from the stale bot Aug 15, 2021
The stale bot removed the wontfix label Aug 15, 2021
The stale bot added the wontfix label Sep 14, 2021
coqui-ai deleted a comment from the stale bot Sep 15, 2021
The stale bot removed the wontfix label Sep 15, 2021
@ghost commented Sep 21, 2021

I would like to contribute my own model, but I'm stuck in the middle. I have created an LJSpeech-format dataset of my own voice. To train my model I need a config.json file, so can anyone provide a template config.json for the LJSpeech dataset format required to train my model?

Thanks in Advance

@erogol (Member, Author) commented Sep 21, 2021

@ManoBharathi93 you can start from the LJSpeech recipes in the recipes folder and change the config fields to match your dataset specs. You can find more info here: https://tts.readthedocs.io/en/latest/
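As a rough illustration of what "changing the config fields" can look like, here is a sketch in the style of the LJSpeech recipes; the module paths, field names, and values below are assumptions that vary between TTS versions and recipes, so treat this as a starting point rather than a working template:

```python
# Sketch in the style of the LJSpeech recipes; imports and field names vary
# between TTS versions, so adapt them to the recipe you start from.
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.configs.tacotron2_config import Tacotron2Config

dataset_config = BaseDatasetConfig(
    name="ljspeech",                  # LJSpeech-style metadata.csv layout
    meta_file_train="metadata.csv",
    path="/path/to/my_dataset/",      # folder containing metadata.csv and wavs/
)

config = Tacotron2Config(
    run_name="my_voice_tacotron2",
    batch_size=32,
    num_loader_workers=4,
    datasets=[dataset_config],
    output_path="./runs/",
)

# Writes the config.json that the training script expects.
config.save_json("./config.json")
```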

@ghost commented Sep 21, 2021

@erogol thanks a lot sir

@ghost commented Sep 22, 2021

Hello folks, how can I add a drop-down menu to the web UI to list the available (downloaded) models? When I change the server.py file, the web interface does not change. Please mention which file I need to edit for changes to show up in the web UI.

@godspirit00 commented Oct 8, 2021

I'd like to share a Tacotron2-DCA model and a Univnet model I trained on the Nancy corpus.

Here is a sample:

sample.mp4

The link to the models:
https://drive.google.com/drive/folders/1bMNOjjYxcCkgwkcYAlsPR3qM4hZQzAOR?usp=sharing

Thanks again for the great work!

@erogol (Member, Author) commented Oct 9, 2021

@godspirit00 the quality is awesome.

stale bot commented Nov 8, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.

The stale bot added the wontfix label Nov 8, 2021
coqui-ai locked and limited conversation to collaborators Nov 10, 2021
erogol closed this as completed Nov 10, 2021
The stale bot removed the wontfix label Nov 10, 2021

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
