Minjiaz/zero offload (microsoft#382)
Co-authored-by: Jeff Rasley <[email protected]>
minjiaz and jeffra authored Sep 10, 2020
1 parent be4b94b commit 59ce90d
Showing 3 changed files with 23 additions and 1 deletion.
6 changes: 6 additions & 0 deletions docs/_pages/features.md
@@ -103,6 +103,12 @@ during the backward computation, the activation gradients are short lived while
gradients are long lived. CMO transfers activation checkpoints and parameter gradients
to contiguous buffers preventing memory fragmentation.

## ZeRO-Offload

ZeRO-Offload pushes the boundary of the maximum model size that can be trained efficiently using minimal GPU resources, by exploiting computational and memory resources on both GPUs and their host CPUs. It allows training up to 13-billion-parameter models on a single NVIDIA V100 GPU, 10x larger than the state-of-the-art, while retaining high training throughput of over 30 teraflops per GPU.

For more details, see the [ZeRO-Offload release blog](https://www.microsoft.com/en-us/research/?p=689370&secret=iSlooB) and the [tutorial](/tutorials/zero-offload/) on integration with DeepSpeed.
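
In practice, ZeRO-Offload is enabled through the DeepSpeed JSON configuration. The following is a minimal sketch, assuming a ZeRO stage 2 setup and an assumed `cpu_offload` flag; consult the tutorial above for the authoritative configuration schema.

```python
import json

# Sketch of a DeepSpeed configuration with ZeRO-Offload enabled:
# ZeRO stage 2 partitions optimizer states and gradients, and the
# (assumed) "cpu_offload" flag keeps them in host CPU memory.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,  # assumed key name; see the ZeRO-Offload tutorial
    },
}

# Write the config to a JSON file that can be passed to DeepSpeed,
# for example via the --deepspeed_config command-line argument.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```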

## Additional Memory and Bandwidth Optimizations

### Smart Gradient Accumulation
14 changes: 14 additions & 0 deletions docs/_posts/2020-09-09-ZeRO-Offload.md
@@ -0,0 +1,14 @@
---
layout: single
title: "10x bigger model training on a single GPU with ZeRO-Offload"
excerpt: ""
categories: news
new_post: true
date: 2020-09-09 00:00:00
---

We introduce a new technology called ZeRO-Offload to enable **10X bigger model training on a single GPU**. ZeRO-Offload extends ZeRO-2 to leverage both CPU and GPU memory for training large models. Using a machine with **a single GPU**, our users can now run **models of up to 13 billion parameters** without running out of memory, 10x bigger than existing approaches, while obtaining competitive throughput. This feature democratizes multi-billion-parameter model training and opens the door for many deep learning practitioners to explore bigger and better models.

* For more information on ZeRO-Offload, see our [press release]( {{ site.press_release_v3 }} ).
* For more information on how to use ZeRO-Offload, see our [ZeRO-Offload tutorial](https://www.deepspeed.ai/tutorials/zero-offload/).
* The source code for ZeRO-Offload can be found in the [DeepSpeed repo](https://github.com/microsoft/deepspeed).
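
A minimal usage sketch is shown below, assuming a toy `torch.nn.Linear` model as a placeholder for a real network and a launch through the `deepspeed` launcher with `--deepspeed_config ds_config.json`; it follows the general initialization pattern described in the tutorial, not a prescribed implementation.

```python
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser(description="Toy ZeRO-Offload training script")
parser.add_argument("--local_rank", type=int, default=-1,
                    help="local rank passed in by the distributed launcher")
# Adds DeepSpeed's own arguments, including --deepspeed_config.
parser = deepspeed.add_config_arguments(parser)
args = parser.parse_args()

# Placeholder model for illustration only.
model = torch.nn.Linear(1024, 1024)

# DeepSpeed wraps the model; with ZeRO-Offload enabled in the config,
# optimizer states and gradients are held in CPU memory during training.
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
)
```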
4 changes: 3 additions & 1 deletion docs/index.md
@@ -10,7 +10,6 @@ efficient, and effective.
<p align="center"><i><b>10x Larger Models</b></i></p>
<p align="center"><i><b>10x Faster Training</b></i></p>
<p align="center"><i><b>Minimal Code Change</b></i></p>

DeepSpeed can train DL models with over a hundred billion parameters on the current
generation of GPU clusters, while achieving over 10x in system performance
compared to the state-of-the-art. Early adopters of DeepSpeed have already produced
@@ -157,6 +156,9 @@ overview](/features/) for descriptions and usage.
* Activation Partitioning
* Constant Buffer Optimization
* Contiguous Memory Optimization
* [ZeRO-Offload](/features/#zero-offload)
* Leverage both CPU/GPU memory for model training
* Support 10B model training on a single GPU
* [Additional Memory and Bandwidth Optimizations](/features/#additional-memory-and-bandwidth-optimizations)
* Smart Gradient Accumulation
* Communication/Computation Overlap
