This is a project on the speech separation task. Audio-only and audio-visual deep learning separation models are implemented and modified based on the paper Looking to Listen at the Cocktail Party[1].
To run this project on Colab, download this repository into your Google Drive, then run Looking_to_listen.ipynb in Colab.
Remember to empty the trash folder in your Google Drive while downloading the audio and video data: many files and folders are deleted during the process, which can otherwise fill up your Google Drive storage.
AVSpeech dataset: contains 4700 hours of video segments, from a total of 290k YouTube videos.
Customized video and audio downloaders are provided in audio and video (based on youtube-dl, sox, and ffmpeg); a download sketch is shown below.
Instructions for generating the data are under Data.
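The following is a minimal sketch of how a single AVSpeech segment could be fetched and trimmed with youtube-dl and ffmpeg, not the repository's downloader. The output paths, the 16 kHz mono setting, and the function name `download_segment` are assumptions for illustration.

```python
# Sketch: download one AVSpeech segment and trim it to the annotated interval.
import subprocess

def download_segment(youtube_id, start, end, out_wav="segment.wav"):
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    # Extract the audio track as a wav file (youtube-dl uses ffmpeg internally).
    subprocess.run(["youtube-dl", "-x", "--audio-format", "wav",
                    "-o", "tmp_audio.%(ext)s", url], check=True)
    # Trim the requested interval and resample to 16 kHz mono.
    subprocess.run(["ffmpeg", "-y", "-i", "tmp_audio.wav",
                    "-ss", str(start), "-t", str(end - start),
                    "-ar", "16000", "-ac", "1", out_wav], check=True)
```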
There are several preprocessing functions in utils, including STFT, iSTFT, power-law compression, the complex ratio mask, and the modified hyperbolic tangent[5], etc. Below is the preprocessing for the audio data:
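As a rough illustration of the audio pipeline, here is a minimal sketch of STFT plus power-law compression. The parameters (16 kHz sampling rate, 25 ms window, 10 ms hop, exponent p = 0.3) follow the Looking-to-Listen paper and are assumptions, not read from this repository's utils.

```python
# Sketch: waveform -> compressed complex spectrogram (two-channel input).
import numpy as np
import librosa

SR = 16000      # assumed sampling rate
N_FFT = 512     # assumed FFT size
HOP = 160       # assumed hop length (10 ms at 16 kHz)
WIN = 400       # assumed window length (25 ms at 16 kHz)
P = 0.3         # assumed power-law compression exponent

def audio_to_compressed_stft(path):
    wav, _ = librosa.load(path, sr=SR)
    stft = librosa.stft(wav, n_fft=N_FFT, hop_length=HOP, win_length=WIN)
    # Power-law compression on the magnitude; phase is preserved.
    mag, phase = np.abs(stft), np.angle(stft)
    compressed = mag ** P
    real = compressed * np.cos(phase)
    imag = compressed * np.sin(phase)
    # Stack real and imaginary parts as two channels for the network input.
    return np.stack([real, imag], axis=-1)
```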
For the visual part, MTCNN is applied to detect faces, which are then corrected against the provided face centers[2]. The visual frames are transformed into 1792-dimensional face embedding features (the lowest layer in the network that is not spatially varying) with a pre-trained FaceNet model[3]. Below is the preprocessing for the visual data:
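For orientation, here is a minimal sketch of the face detection and embedding step using the facenet-pytorch package. The package choice and the use of its final 512-dimensional embedding, rather than the 1792-dimensional intermediate FaceNet feature described above, are assumptions for illustration only.

```python
# Sketch: video frame -> cropped face -> face embedding.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                          # face detector / cropper
facenet = InceptionResnetV1(pretrained='vggface2').eval()

def frame_to_face_embedding(frame_path):
    img = Image.open(frame_path).convert('RGB')
    face = mtcnn(img)                                  # cropped face tensor, or None
    if face is None:
        return None                                    # no face detected in this frame
    with torch.no_grad():
        emb = facenet(face.unsqueeze(0))               # (1, 512) embedding
    return emb.squeeze(0)
```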
The audio-only model is provided in model_v1 and the audio-visual model is provided in model_v2.
Following is the brief structure of the audio-visual model; some layers are revised to match our customized compression and dataset.[1]
Loss function: a modified discriminative loss function inspired by paper[4] (a sketch is given below).
Optimizer: Adam
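The sketch below shows one possible form of such a discriminative loss for the two-speaker case, written in TensorFlow/Keras: each predicted spectrogram is pulled toward its own target and pushed away from the other speaker's target. The exact form and the weight `GAMMA` are assumptions; the repository's loss may differ in detail.

```python
# Sketch: discriminative loss for two-speaker separation.
import tensorflow as tf

GAMMA = 0.1  # assumed weight for the cross-speaker (discriminative) term

def discriminative_loss(y_true, y_pred):
    """y_true / y_pred: (batch, time, freq, 2, n_speakers) compressed spectrograms."""
    # Reconstruction term: each prediction should match its own target speaker.
    same = tf.reduce_mean(tf.square(y_pred - y_true))
    # Discriminative term: penalize similarity to the *other* speaker's target
    # by swapping the speaker axis (valid for the two-speaker case).
    swapped = tf.reverse(y_true, axis=[-1])
    diff = tf.reduce_mean(tf.square(y_pred - swapped))
    return same - GAMMA * diff
```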
By applying the predicted complex ratio mask (cRM) to the STFT of the mixed speech, we obtain the STFT of a single speaker's speech.
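A minimal sketch of this step is given below: the mask is applied as a complex multiplication and the result is inverted back to a waveform with the iSTFT. The STFT parameters, the use of librosa, and the assumption that the mask is stored as separate real and imaginary channels are illustrative; decompression of any compressed mask is omitted here.

```python
# Sketch: mixed STFT + complex ratio mask -> separated waveform.
import librosa

def apply_crm(mix_stft, mask_real, mask_imag):
    """Complex multiplication of the mask with the mixture STFT."""
    sep_real = mask_real * mix_stft.real - mask_imag * mix_stft.imag
    sep_imag = mask_real * mix_stft.imag + mask_imag * mix_stft.real
    return sep_real + 1j * sep_imag

def stft_to_wav(stft, hop_length=160, win_length=400):
    # Assumed hop/window lengths matching the analysis STFT above.
    return librosa.istft(stft, hop_length=hop_length, win_length=win_length)
```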
Samples for running the prediction are provided in the eval files in model_v1 and model_v2.