
Releases: NVIDIA/NeMo

NVIDIA Neural Modules 2.2.0rc3

25 Feb 12:47
b21e079
Pre-release

Prerelease: NVIDIA Neural Modules 2.2.0rc3 (2025-02-25)

NVIDIA Neural Modules 2.2.0rc2

17 Feb 17:04
798b676
Pre-release

Prerelease: NVIDIA Neural Modules 2.2.0rc2 (2025-02-17)

NVIDIA Neural Modules 2.2.0rc1

04 Feb 08:02
18e2bd8
Pre-release

Prerelease: NVIDIA Neural Modules 2.2.0rc1 (2025-02-04)

NVIDIA Neural Modules 2.2.0rc0

02 Feb 23:30
2f66ada
Pre-release

Prerelease: NVIDIA Neural Modules 2.2.0rc0 (2025-02-02)

NVIDIA Neural Modules 2.1.0

03 Jan 10:31
633cb60

Highlights

  • Training
    • Fault Tolerance
      • Straggler Detection
      • Auto Relaunch
  • LLM & MM
    • MM models
      • LLaVA-NeXT
      • Llama 3.2
    • Sequence Model Parallel for NeVA
    • Enable Energon
    • SigLIP (NeMo 1.0 only)
    • LLM migration to NeMo 2.0
      • Starcoder2
      • Gemma 2
      • T5
      • Baichuan
      • BERT
      • Mamba
      • ChatGLM
    • DoRA support
  • Export
    • NeMo 2.0 base model export path for NIM
    • PTQ in NeMo 2.0
  • ASR
    • Timestamps with TDT decoder
    • Timestamps option with .transcribe()
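The fault-tolerance highlights above include straggler detection, which in distributed training generally means comparing per-rank step times and flagging ranks that fall behind their peers. A minimal conceptual sketch of that idea (not NeMo's implementation; the function name and threshold are hypothetical):

```python
import statistics

def find_stragglers(step_times_per_rank, threshold=1.25):
    """Flag ranks whose mean step time over an observation window
    exceeds the median across ranks by a relative threshold (here 25%).
    Illustrative only; NeMo's detector is more sophisticated."""
    means = {rank: statistics.fmean(times)
             for rank, times in step_times_per_rank.items()}
    median = statistics.median(means.values())
    return sorted(rank for rank, m in means.items() if m > threshold * median)

# Example: rank 2 is consistently ~40% slower than its peers.
times = {0: [0.50, 0.51], 1: [0.49, 0.50], 2: [0.70, 0.72], 3: [0.50, 0.52]}
print(find_stragglers(times))  # [2]
```

Paired with auto-relaunch, the detected rank's node can be excluded and the job restarted from the last checkpoint.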

Detailed Changelogs: ASR · TTS · NLP / NMT · Text Normalization / Inverse Text Normalization · Export · Bugfixes · Uncategorized

NVIDIA Neural Modules 2.1.0rc2

21 Dec 18:54
49ef560
Pre-release

Prerelease: NVIDIA Neural Modules 2.1.0rc2 (2024-12-21)

NVIDIA Neural Modules 2.1.0rc1

20 Dec 08:48
526a525
Pre-release

Prerelease: NVIDIA Neural Modules 2.1.0rc1 (2024-12-20)

NVIDIA Neural Modules 2.1.0rc0

11 Dec 23:16
ceeafa4
Pre-release
[🤠]: Howdy folks, let's release NeMo `r2.1.0`! (#11556)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: pablo-garay <[email protected]>

NVIDIA Neural Modules 2.0.0

14 Nov 18:57
e938df3

Highlights

Large Language Models & Multimodal

  • Training
    • Long context recipe
    • PyTorch Native FSDP 1
  • Models
    • Llama 3
    • Mixtral
    • Nemotron
  • NeMo 1.0
    • SDXL (text-to-image)
    • Model Opt
      • Depth Pruning
      • Logit-based Knowledge Distillation

Export

  • TensorRT-LLM v0.12 integration
  • LoRA support for vLLM
  • FP8 checkpoint
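FP8 checkpoints store weights alongside per-tensor scaling factors, because FP8 formats such as E4M3 have a very narrow dynamic range (largest finite magnitude 448). A conceptual sketch of per-tensor FP8 scaling (not NeMo's export code; rounding to the actual 8-bit grid is omitted):

```python
def fp8_e4m3_scale(amax, fp8_max=448.0):
    # E4M3's largest finite magnitude is 448; scaling by
    # fp8_max / amax maps the tensor's observed amax onto it.
    return fp8_max / amax

def quantize(x, scale, fp8_max=448.0):
    # Scale, then clamp to the representable range.
    return max(-fp8_max, min(fp8_max, x * scale))

scale = fp8_e4m3_scale(896.0)   # 0.5
print(quantize(896.0, scale))   # 448.0
```

At load time the stored scale (or its inverse) is applied to recover values in the original range.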

ASR

  • Parakeet Large (ASR model with punctuation and capitalization)
  • Added Uzbek offline and Georgian streaming models
  • Efficient bucketing optimization to improve batch-size utilization on GPUs
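The bucketing optimization above rests on a simple idea: grouping utterances of similar duration so each batch pads to roughly the same length, wasting less GPU memory and allowing larger batch sizes. A toy sketch of duration bucketing (not NeMo's sampler; names are hypothetical):

```python
def bucket_by_duration(utterances, boundaries):
    """Assign (id, duration_seconds) pairs to duration buckets.
    boundaries is a sorted list of upper bounds; anything longer
    than the last bound falls into a final overflow bucket."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for utt_id, dur in utterances:
        idx = sum(dur > b for b in boundaries)  # count bounds exceeded
        buckets[idx].append(utt_id)
    return buckets

utts = [("a", 2.0), ("b", 9.5), ("c", 4.0), ("d", 14.0)]
print(bucket_by_duration(utts, boundaries=[5.0, 10.0]))
# [['a', 'c'], ['b'], ['d']]
```

Batches are then drawn within a bucket, so a 2-second clip is never padded out to match a 14-second one.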

Detailed Changelogs: ASR · TTS · NLP / NMT

NVIDIA Neural Modules 2.0.0rc1

15 Aug 21:55
579983f

Highlights

Large language models

  • PEFT: QLoRA support, LoRA/QLoRA for Mixture-of-Experts (MoE) dense layers
  • State Space Models & Hybrid Architecture support (Mamba2 and NV-Mamba2-hybrid)
  • Support for Nemotron, Minitron, Gemma 2, Qwen, RAG
  • Custom tokenizer training in NeMo
  • Updated Auto-Configurator for EP, CP, and FSDP
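The LoRA/QLoRA features above build on the same low-rank idea: a frozen weight W is augmented with a trainable update (alpha/r)·B·A, where A and B have rank r much smaller than W's dimensions (in QLoRA, W is additionally stored quantized). A minimal NumPy sketch of the math, not NeMo's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 6, 4, 2, 4   # toy sizes; rank r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base layer plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapter starts as an exact no-op.
assert np.allclose(lora_forward(x), W @ x)

# After training, the update can be merged for zero-overhead inference.
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

Only A and B (a small fraction of W's parameters) receive gradients, which is what makes PEFT cheap in memory and compute.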

Multimodal

  • NeVA: Add SOTA LLM backbone support (Mixtral/LLaMA3) and suite of model parallelism support (PP/EP)
  • Support Language Instructed Temporal-Localization Assistant (LITA) on top of video NeVA

ASR

  • SpeechLM and SALM
  • Adapters for Canary Customization
  • PyTorch allocator in PyTorch 2.2 improves training speed by up to 30% for all ASR models
  • CUDA Graphs for Transducer Inference
  • Replaced WebDataset with Lhotse, giving up to 2x speedup
  • Transcription improvements: speedup and QoL changes
  • ASR prompt formatter for multimodal Canary

Export & Deploy

  • In-framework PyTriton deployment with backends: PyTorch, vLLM, and TRT-LLM (updated to 0.10)
  • TRT-LLM C++ runtime

Detailed Changelogs: ASR · TTS · LLM/Multimodal