forked from jaywalnut310/vits
Merge pull request #1 from YumeAyai/main
Add Dockerfile
Showing 4 changed files with 128 additions and 73 deletions.
.dockerignore
@@ -0,0 +1,26 @@
**/__pycache__
**/.venv
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/bin
**/charts
**/docker-compose*
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
README.md
Dockerfile
@@ -0,0 +1,21 @@
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.7-slim

# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1

# Install build tools and pip requirements
COPY requirements.txt .
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y gcc
RUN apt-get install -y g++
RUN apt-get install -y cmake
RUN python -m pip install -r requirements.txt

WORKDIR /content
COPY . /content
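A minimal sketch of turning this Dockerfile into an image, assuming it is run from the repository root; the `vits` tag is only a placeholder standing in for the "Image name" used by the run command in the README below:

```sh
# Build the image from the repository root, where this Dockerfile lives
docker build -t vits .
```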
README.md
@@ -1,58 +1,65 @@
# VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

### Jaehyeon Kim, Jungil Kong, and Juhee Son

In our recent [paper](https://arxiv.org/abs/2106.06103), we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.

Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural-sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on LJ Speech, a single-speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.

Visit our [demo](https://jaywalnut310.github.io/vits-demo/index.html) for audio samples.

We also provide the [pretrained models](https://drive.google.com/drive/folders/1ksarh-cJf3F5eKJjLVWY0X1j1qsQqiS2?usp=sharing).

**Update note:** Thanks to [Rishikesh (ऋषिकेश)](https://github.com/jaywalnut310/vits/issues/1), our interactive TTS demo is now available on [Colab Notebook](https://colab.research.google.com/drive/1CO61pZizDj7en71NQG_aqqKdGaA_SaBf?usp=sharing).

<table style="width:100%">
  <tr>
    <th>VITS at training</th>
    <th>VITS at inference</th>
  </tr>
  <tr>
    <td><img src="resources/fig_1a.png" alt="VITS at training" height="400"></td>
    <td><img src="resources/fig_1b.png" alt="VITS at inference" height="400"></td>
  </tr>
</table>


## Pre-requisites
0. Python >= 3.6
0. Clone this repository
0. Install Python requirements. Please refer to [requirements.txt](requirements.txt)
    1. You may need to install espeak first: `apt-get install espeak`
0. Download datasets
    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: `ln -s /path/to/LJSpeech-1.1/wavs DUMMY1`
    1. For the multi-speaker setting, download and extract the VCTK dataset, downsample the wav files to 22050 Hz, then rename or create a link to the dataset folder: `ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2`
0. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.
```sh
# Cython-version Monotonic Alignment Search
cd monotonic_align
python setup.py build_ext --inplace

# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK are already provided.
# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt
# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
```


## Training Example
```sh
# LJ Speech
python train.py -c configs/ljs_base.json -m ljs_base

# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base
```


## Inference Example
See [inference.ipynb](inference.ipynb)
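Jupyter itself is not listed in requirements.txt, so here is a sketch of one way to open the notebook; the package and command are standard Jupyter tooling, not something this repository pins:

```sh
# Install Jupyter and open the inference notebook
pip install notebook
jupyter notebook inference.ipynb
```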


## Running in Docker

```sh
docker run -itd --gpus all --name <container_name> -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all <image_name>
```
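The `-itd` flags start the container detached but with an interactive TTY, so it keeps running in the background. A sketch of launching training inside it afterwards, assuming the container was named `vits` (a placeholder, as above):

```sh
# Start LJ Speech training inside the already-running container
docker exec -it vits python train.py -c configs/ljs_base.json -m ljs_base
```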
requirements.txt
@@ -1,15 +1,16 @@
Cython==0.29.21
librosa==0.8.0
matplotlib==3.3.1
numpy==1.21.6
phonemizer==2.2.1
scipy==1.5.2
tensorboard==2.3.0
torch==1.6.0
torchvision==0.7.0
Unidecode==1.1.1
pyopenjtalk==0.2.0
jamo==0.4.1
pypinyin==0.44.0
jieba==0.42.1
protobuf==3.19.0
cn2an==0.5.17
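Outside of Docker, these fairly old pins are easiest to install into an isolated interpreter matching the `python:3.7-slim` base image; a sketch, with the `.venv` directory name chosen arbitrarily:

```sh
# Create a Python 3.7 virtual environment and install the pinned requirements
python3.7 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```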