From e082d4752e3db2acf4348a4522daba077bb58aec Mon Sep 17 00:00:00 2001
From: Minjia Zhang <33713995+minjiaz@users.noreply.github.com>
Date: Tue, 10 Nov 2020 06:33:35 -0800
Subject: [PATCH] updating pld docs (#517)

---
 docs/_posts/2020-10-28-progressive-layer-dropping-news.md | 2 +-
 docs/_tutorials/progressive_layer_dropping.md             | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/_posts/2020-10-28-progressive-layer-dropping-news.md b/docs/_posts/2020-10-28-progressive-layer-dropping-news.md
index 03a39bc281be..baa7ab7acfe5 100755
--- a/docs/_posts/2020-10-28-progressive-layer-dropping-news.md
+++ b/docs/_posts/2020-10-28-progressive-layer-dropping-news.md
@@ -9,6 +9,6 @@ date: 2020-10-29 00:00:00
 
 We introduce a new technology called progressive layer dropping (PLD) to speedup the pre-training of Transformer-based networks through efficient and robust compressed training. The pre-training step of Transformer networks often suffer from unbearable overall computational expenses. We analyze the training dynamics and stability of Transformer networks and propose PLD to sparsely update Transformer blocks following a progressive dropping schedule, which smoothly increases the layer dropping rate for each mini-batch as training evolves along both the temporal and the model depth dimension. PLD is able to allow the pre-training to be **2.5X faster** to get similar accuracy on downstream tasks and allows the training to be **24% faster** when training the same number of samples, not at the cost of excessive hardware resources.
 
- * For detailed technology deep dive, see our [technical report]().
+ * For detailed technology deep dive, see our [technical report](https://arxiv.org/pdf/2010.13369.pdf).
 * For more information on how to use PLD, see our [Progressive layer dropping tutorial](https://www.deepspeed.ai/tutorials/progressive_layer_dropping/).
 * The source code for PLD is now available at the [DeepSpeed repo](https://github.com/microsoft/deepspeed).

diff --git a/docs/_tutorials/progressive_layer_dropping.md b/docs/_tutorials/progressive_layer_dropping.md
index 3cbb09529c1a..2ddbb69c58ad 100755
--- a/docs/_tutorials/progressive_layer_dropping.md
+++ b/docs/_tutorials/progressive_layer_dropping.md
@@ -3,7 +3,7 @@ title: "Accelerating Training of Transformer-Based Language Models with Progress
 ---
 
-In this tutorial, we are going to introduce the progressive layer dropping (PLD) in DeepSpeed and provide examples on how to use PLD. PLD allows to train Transformer networks such as BERT 24% faster under the same number of samples and 2.5 times faster to get similar accuracy on downstream tasks. Detailed description of PLD and the experimental results are available in our [technical report](git push --set-upstream origin staging-pld-v1).
+In this tutorial, we are going to introduce the progressive layer dropping (PLD) in DeepSpeed and provide examples on how to use PLD. PLD allows to train Transformer networks such as BERT 24% faster under the same number of samples and 2.5 times faster to get similar accuracy on downstream tasks. Detailed description of PLD and the experimental results are available in our [technical report](https://arxiv.org/pdf/2010.13369.pdf).
 
 To illustrate how to use PLD in DeepSpeed, we show how to enable PLD to pre-train a BERT model and fine-tune the pre-trained model on the GLUE datasets.
 
@@ -152,4 +152,4 @@ The fine-tuning results can be found under the "logs" directory, and below are e
 | Bert_{base} (original) | 66.4 | 88.9/84.8 | 87.1/89.2 | 52.1 | 93.5 | 90.5 | 71.2/89.2 | 84.6/83.4 | 80.7 |
 | Bert_{base} (Our impl) | 67.8 | 88.0/86.0 | 89.5/89.2 | 52.5 | 91.2 | 87.1 | 89.0/90.6 | 82.5/83.4 | 82.1 |
 | PLD | 69.3 | 86.6/84.3 | 90.0/89.6 | 55.8 | 91.6 | 90.7 | 89.6/91.2 | 84.1/83.8 | 82.9 |
-| Lr | 7e-5 | 9e-5 | 7e-5 | 5e-5 | 7e-5 | 9e-5 | 2e-4 | 3e-5 | |
+| Lr | 7e-5 | 9e-5 | 7e-5 | 5e-5 | 7e-5 | 9e-5 | 2e-4 | 3e-5 | |
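
The docs touched by this patch describe PLD's progressive dropping schedule only in prose. Below is a minimal, illustrative Python sketch of a schedule of that shape; it is not DeepSpeed's actual implementation, and the exponential decay form, the `theta_bar` and `gamma` parameters, and the linear scaling of the drop rate with depth are assumptions chosen to match the description (the drop rate increases smoothly over mini-batches and with layer depth).

```python
import math

def pld_keep_probability(step: int, layer: int, num_layers: int,
                         theta_bar: float = 0.5, gamma: float = 1e-4) -> float:
    """Illustrative keep-probability schedule in the spirit of PLD (assumed form).

    The global keep ratio decays smoothly from 1.0 toward `theta_bar` as
    training progresses (temporal dimension), and deeper layers are dropped
    more aggressively than shallower ones (depth dimension).
    """
    # Temporal schedule: starts at 1.0 and smoothly approaches theta_bar.
    theta_t = (1.0 - theta_bar) * math.exp(-gamma * step) + theta_bar
    # Depth scaling: the drop rate grows linearly with the layer index.
    return 1.0 - (layer + 1) / num_layers * (1.0 - theta_t)

# Early in training every block is kept; later, the deepest block of a
# 12-layer BERT is kept with probability approaching theta_bar.
print(pld_keep_probability(step=0, layer=11, num_layers=12))        # ~1.0
print(pld_keep_probability(step=200_000, layer=11, num_layers=12))  # ~0.5
```

In DeepSpeed itself, PLD is enabled through the configuration and training steps covered in the linked tutorial rather than through a standalone function like this one.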