
NVIDIA NeMo

Create State-of-the-Art Models


Introduction

NeMo is a toolkit for creating Conversational AI applications.

The NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI from reusable components called Neural Modules. A Neural Module is a conceptual block of a neural network that takes typed inputs and produces typed outputs; such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations. The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP), and speech synthesis (TTS). Built for speed, NeMo can take advantage of NVIDIA Tensor Cores and scale training out to multiple GPUs and multiple nodes. NeMo also integrates with NVIDIA Jarvis.
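
To make the composition model concrete, here is a minimal sketch in the spirit of the examples/start_here tutorial. It assumes the NeMo 0.11-era Python API; the module names (RealFunctionDataLayer, TaylorNet, MSELoss) come from the toolkit's tutorial collection, but treat the exact signatures as assumptions:

import nemo

# The factory manages devices, logging, and the training backend.
nf = nemo.core.NeuralModuleFactory()

# Three neural modules: a data layer, a trainable model, and a loss.
data = nemo.tutorials.RealFunctionDataLayer(n=10000, batch_size=128)
model = nemo.tutorials.TaylorNet(dim=4)
loss = nemo.tutorials.MSELoss()

# Composition: calling a module wires its typed outputs to the typed
# inputs of the next module, building up a computation graph.
x, y = data()
y_pred = model(x=x)
loss_tensor = loss(predictions=y_pred, target=y)

# Train the assembled graph.
nf.train([loss_tensor], optimizer="sgd", optimization_params={"lr": 3e-4, "num_epochs": 1})

The typed connections are what make the graph checkable: NeMo can verify at composition time that, for example, the model's output type matches what the loss expects.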

Release v0.11

  • This release improves ease of use of NeMo models and module composition.
  • New models, including voice activity detection, speaker identification, MatchboxNet for speech commands, and Megatron BERT trained on biomedical data. For a complete list of models, see the table below.


Getting started

The latest stable version of NeMo is 0.11, available via pip.

Requirements

  1. Python 3.6 or 3.7
  2. PyTorch 1.4.* with GPU support
  3. (optional, for best performance) NVIDIA APEX. Install from https://github.com/NVIDIA/apex; a reference sketch follows this list.
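
For reference, the APEX README of that era installed from source with the C++ and CUDA extensions enabled, roughly as follows; treat the exact flags as assumptions and check the linked repository for the current command:

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./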

Docker containers

NeMo docker container

You can use NeMo's docker container with all dependencies pre-installed:

docker run --runtime=nvidia -it --rm --shm-size=16g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.11

If you are using the NVIDIA NGC PyTorch container instead, follow these steps:

  • Pull the container: docker pull nvcr.io/nvidia/pytorch:20.01-py3
  • Run: docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3
  • Install dependencies: apt-get update && apt-get install -y libsndfile1 ffmpeg && pip install Cython
  • pip install nemo_toolkit (installs NeMo core only)
  • pip install nemo_toolkit[all] (installs NeMo core and all collections)
  • pip install nemo_toolkit[asr] (installs NeMo core and the ASR collection for speech recognition)
  • pip install nemo_toolkit[nlp] (installs NeMo core and the NLP collection for natural language processing)
  • pip install nemo_toolkit[tts] (installs NeMo core and the TTS collection for speech synthesis)
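
After any of the pip commands above, a quick sanity check is to import the toolkit from Python. This assumes the package exposes nemo.__version__, which recent releases do, but treat that as an assumption:

python -c "import nemo; print(nemo.__version__)"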

See examples/start_here to get started with the simplest example.


Pre-trained models

Modality Model Trained on
ASR Jasper10x5DR_En LibriSpeech, WSJ, Mozilla Common Voice (en_1488h_2019-12-10), Fisher, Switchboard, and Singapore English National Speech Corpus (Part 1)
ASR QuartzNet15x5En LibriSpeech, WSJ, Mozilla Common Voice (en_1087h_2019-06-12), Fisher, and Switchboard
ASR QuartzNet15x5Zh AISHELL-2 Mandarin
NLP BERT base uncased English Wikipedia and BookCorpus (sequence length <= 512)
NLP BERT large uncased English Wikipedia and BookCorpus (sequence length <= 512)
TTS Tacotron2 LJSpeech
TTS WaveGlow LJSpeech

Development

If you would like to use the master branch and/or develop NeMo, you can run the reinstall.sh script.

See the documentation for the master branch.

Installing from GitHub

If you prefer to use NeMo's latest development version (from GitHub), follow the steps below:

  1. Clone the repository: git clone https://github.com/NVIDIA/NeMo.git
  2. Go to the NeMo folder and reinstall the toolkit with collections:
./reinstall.sh

Style tests

python setup.py style  # Check overall project code style and output issues as a diff.
python setup.py style --fix  # Try to fix errors in place.
python setup.py style --scope=tests  # Operate within a certain scope (directory or file).

NeMo Documentation

Version Description
Master Documentation of the master branch
Latest Documentation of the latest (i.e. master) branch
v0.11.0 Documentation of the v0.11.0 release
v0.10.1 Documentation of the v0.10.1 release

NeMo Test Suite

NeMo contains a test suite divided into five subsets:

  1. unit: unit tests, i.e. tests of a single, well-isolated piece of functionality
  2. integration: tests checking elements when integrated into subsystems
  3. system: tests working at the highest integration level
  4. acceptance: tests checking whether the developed product/model passes user-defined acceptance criteria
  5. docs: tests related to documentation (deselect with '-m "not docs"')

You can run all the tests locally by executing:

pytest

To run a subset of tests, use the -m argument followed by the subset name, e.g. for the system subset:

pytest -m system

By default, all tests are executed on the GPU. You can also run the test suite on the CPU by passing the --cpu command-line argument, e.g.:

pytest -m unit --cpu
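
Because the subsets are ordinary pytest markers, they can also be combined with standard pytest marker expressions, e.g. to run unit tests while skipping documentation tests:

pytest -m "unit and not docs"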

Citation

If you are using NeMo, please cite the following publication:

@misc{nemo2019,
    title={NeMo: a toolkit for building AI applications using Neural Modules},
    author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
    year={2019},
    eprint={1909.09577},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
