UtterancePIT-Speech-Separation

Based on funcwj's uPIT, this repository provides training code with multi-GPU support and a data pipeline rebuilt on PyTorch's DataLoader.

If you want to see funcwj's original code, his repository is here:

uPIT-for-speech-separation

Demo Pages: Results of the pure speech separation model
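
For context, utterance-level PIT resolves the speaker-permutation ambiguity by evaluating the separation loss under every assignment of network outputs to reference speakers and training on the cheapest one per utterance. Below is a minimal PyTorch sketch of such a loss; it is illustrative only, not the exact loss used in this repository, and the (batch, num_spk, T, F) tensor layout is an assumption.

    import itertools
    import torch

    def pit_mse_loss(estimates, targets):
        """Utterance-level PIT: try every speaker permutation, keep the best.

        estimates, targets: (batch, num_spk, T, F) separated spectra.
        Illustrative sketch; the repository's actual loss may differ.
        """
        batch, num_spk = estimates.shape[:2]
        per_perm = []
        for perm in itertools.permutations(range(num_spk)):
            # per-utterance MSE under this particular speaker assignment
            errs = [((estimates[:, i] - targets[:, p]) ** 2).reshape(batch, -1).mean(dim=1)
                    for i, p in enumerate(perm)]
            per_perm.append(sum(errs) / num_spk)
        # (num_perms, batch): pick the best permutation per utterance, then average
        return torch.stack(per_perm).min(dim=0).values.mean()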

Accomplished goals

  • Multi-GPU training support (see the sketch after this list)
  • Data loading with PyTorch's built-in DataLoader
  • Pre-trained models provided
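
As a rough illustration of the first two points: multi-GPU training in PyTorch is commonly done by wrapping the network in torch.nn.DataParallel and feeding it batches from a standard torch.utils.data.DataLoader. The MixtureDataset below is a random-tensor placeholder, not the Dataset class used in this repository.

    import torch
    from torch.utils.data import DataLoader, Dataset

    class MixtureDataset(Dataset):
        """Placeholder dataset yielding (mixture, sources) spectrogram pairs."""
        def __init__(self, num_items=8, frames=100, bins=129, num_spk=2):
            self.items = [(torch.randn(frames, bins),
                           torch.randn(num_spk, frames, bins))
                          for _ in range(num_items)]

        def __len__(self):
            return len(self.items)

        def __getitem__(self, idx):
            return self.items[idx]

    # toy stand-in for the separation network
    model = torch.nn.Linear(129, 129)
    if torch.cuda.device_count() > 1:
        # DataParallel replicates the module and splits each batch along dim 0
        model = torch.nn.DataParallel(model)
    if torch.cuda.is_available():
        model = model.cuda()

    # PyTorch's built-in DataLoader handles batching and shuffling
    loader = DataLoader(MixtureDataset(), batch_size=4, shuffle=True)
    for mixture, sources in loader:
        if torch.cuda.is_available():
            mixture = mixture.cuda()
        estimates = model(mixture)  # (batch, frames, bins)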

Python Library Versions

  • PyTorch==1.3.0
  • tqdm==4.32.1
  • librosa==0.7.1
  • scipy==1.3.0
  • numpy==1.16.4
  • PyYAML==5.1.1

How to Use This Repository

  1. Generate the dataset using create-speaker-mixtures.zip with WSJ0 or TIMIT.

  2. Prepare the scp files (each line of an scp file is "filename path"; see the example after these steps):

     python create_scp.py
  3. Prepare the CMVN statistics (cepstral mean and variance normalization, a computationally efficient feature-normalization technique for robust speech recognition; a sketch follows the steps):

     # Calculated by the compute_cmvn.py script:
     python compute_cmvn.py ./tt_mix.scp ./cmvn.dict
  4. Modify the YAML config, mainly the scp paths and the cmvn path, and set num_spk in run_pit.py to the number of speakers.

  5. Training:

    sh train.sh
  6. Inference:

    sh test.sh
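
Two sketches for the data-preparation steps above. Both use assumed names and settings (the utterance names, /path/to/... paths, 8 kHz sample rate, STFT size, and the pickled mean/std dictionary are placeholders), so they show the idea rather than the exact behaviour of create_scp.py or compute_cmvn.py.

An scp file from step 2 maps an utterance name to its wav path, one entry per line:

    mix_0001.wav /path/to/wsj0-2mix/tt/mix/mix_0001.wav
    mix_0002.wav /path/to/wsj0-2mix/tt/mix/mix_0002.wav

The CMVN statistics from step 3 are just the per-frequency mean and variance of the training features, stored once so that features can be standardized at load time:

    import pickle

    import librosa
    import numpy as np

    def compute_cmvn(scp_path, out_path, n_fft=256, hop=64, sr=8000):
        """Accumulate mean/variance of log-magnitude spectra over an scp file (sketch)."""
        total, total_sq, frames = 0.0, 0.0, 0
        with open(scp_path) as f:
            for line in f:
                name, path = line.strip().split()
                wav, _ = librosa.load(path, sr=sr)
                spec = np.abs(librosa.stft(wav, n_fft=n_fft, hop_length=hop)).T  # (T, F)
                feat = np.log(np.maximum(spec, 1e-8))
                total += feat.sum(axis=0)
                total_sq += (feat ** 2).sum(axis=0)
                frames += feat.shape[0]
        mean = total / frames
        std = np.sqrt(np.maximum(total_sq / frames - mean ** 2, 1e-10))
        with open(out_path, "wb") as f:
            pickle.dump({"mean": mean, "std": std}, f)

    # usage mirroring the command in step 3:
    # compute_cmvn("./tt_mix.scp", "./cmvn.dict")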
    

Reference

  • Kolbæk M, Yu D, Tan Z-H, et al. Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017, 25(10): 1901-1913.
  • https://github.com/funcwj/uPIT-for-speech-separation
