Progressive layer dropping docs #499

Merged · 4 commits · Nov 9, 2020
3 changes: 1 addition & 2 deletions README.md
@@ -155,8 +155,7 @@ all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of
Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
-[[email protected]](mailto:[email protected]) with any additional questions or
-comments.
+[[email protected]](mailto:[email protected]) with any additional questions or comments.

# Publications
1. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. (2019) ZeRO: Memory Optimization Towards Training A Trillion Parameter Models. [ArXiv:1910.02054](https://arxiv.org/abs/1910.02054)
5 changes: 5 additions & 0 deletions docs/_posts/2020-10-28-progressive-layer-dropping-news.md
@@ -0,0 +1,5 @@
* We introduce a new technology called progressive layer dropping (PLD) to speed up the pre-training of Transformer-based networks through efficient and robust compressed training. The pre-training step of Transformer networks often suffers from unbearably high computational cost. We analyze the training dynamics and stability of Transformer networks and propose PLD to sparsely update Transformer blocks following a progressive dropping schedule, which smoothly increases the layer dropping rate for each mini-batch as training evolves along both the temporal and the model depth dimensions. PLD allows pre-training to be **2.5X faster** in reaching similar accuracy on downstream tasks and **24% faster** when training on the same number of samples, without requiring excessive hardware resources.

* For a detailed technology deep dive, see our [technical report](XXX).
* For more information on how to use PLD, see our [Progressive layer dropping tutorial](https://www.deepspeed.ai/tutorials/progressive_layer_dropping/).
* The source code for PLD is now available at the [DeepSpeed repo](https://github.com/microsoft/deepspeed).
130 changes: 130 additions & 0 deletions docs/_tutorials/progressive_layer_dropping.md
@@ -0,0 +1,130 @@
---
title: "Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping"

---

In this tutorial, we introduce the progressive layer dropping (PLD) feature in DeepSpeed and provide examples of how to use it. PLD allows Transformer networks such as BERT to be trained 24% faster under the same number of samples and 2.5 times faster to reach similar accuracy on downstream tasks. A detailed description of PLD and the experimental results are available in our [technical report](XXX).

To illustrate how to use PLD in DeepSpeed, we show how to enable PLD to pre-train a BERT model and fine-tune the pre-trained model on the GLUE datasets.

## Running Pre-training with DeepSpeed and PLD

To perform pre-training, one needs to first prepare the datasets. For this part, please refer to our [BERT Pre-training](/tutorials/bert-pretraining/) post, which contains detailed information on downloading and pre-processing the data. For the experiments below, we use Wikipedia text and BookCorpus, similar to [Devlin et al.](https://arxiv.org/abs/1810.04805).

The main part of pre-training is done in `deepspeed_train.py`, which has
already been modified to use DeepSpeed. The shell script `ds_train_bert_progressive_layer_drop_bsz4k_seq128.sh` launches the pre-training with DeepSpeed and PLD:

```shell
bash ds_train_bert_progressive_layer_drop_bsz4k_seq128.sh
```
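
`deepspeed_train.py` already contains the DeepSpeed integration, but if you are curious what that wiring typically looks like, the sketch below shows the general pattern of handing a model and its parameters to `deepspeed.initialize` and letting the returned engine drive the backward and optimizer steps. The function and variable names (`train`, `args`, `batch`) are illustrative placeholders, not the exact code in the script.

```python
# Illustrative sketch of typical DeepSpeed wiring inside a training script.
# Not the exact code in deepspeed_train.py; `args`, `model`, and `batch`
# are placeholders.
import deepspeed

def train(args, model, data_loader):
    # `args.deepspeed_config` is expected to point at the JSON config shown below.
    model_engine, optimizer, _, _ = deepspeed.initialize(
        args=args,
        model=model,
        model_parameters=[p for p in model.parameters() if p.requires_grad],
    )
    for step, batch in enumerate(data_loader):
        loss = model_engine(batch)      # forward pass (model returns the loss here)
        model_engine.backward(loss)     # DeepSpeed-managed backward
        model_engine.step()             # optimizer step + learning rate schedule
```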

Most of the flags in the above script should be familiar if you have stepped through the BERT pre-training [tutorial](/tutorials/bert-pretraining/). To enable PLD, one needs to add the following two flags. The first enables progressive layer dropping on the Transformer blocks; the second sets the drop schedule parameter theta, for which we recommend 0.5, a value that worked well in our experiments.

--progressive_layer_drop --layerdrop_theta 0.5

Setting these flags should print a message as below:

Enabled progressive layer dropping (theta = 0.5).
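
To build some intuition for what `--layerdrop_theta` controls, the snippet below sketches one way such a progressive schedule can be written down: the expected fraction of retained Transformer blocks starts near 1.0 and smoothly decays toward theta (0.5 here) as training proceeds, with deeper blocks dropped more aggressively than shallow ones. This is only an illustrative sketch under those assumptions; the exact formula and constants used by DeepSpeed live in the PLD implementation and may differ.

```python
import math

def keep_probability(step, layer_idx, num_layers, theta=0.5, gamma=1e-4):
    """Illustrative PLD-style schedule (not the exact DeepSpeed formula).

    Returns the probability of keeping block `layer_idx` (0-based, shallow -> deep)
    at training step `step`. The global keep ratio decays from 1.0 toward `theta`,
    and deeper blocks are dropped more often than shallow ones.
    """
    # Temporal dimension: global keep ratio decays smoothly from 1.0 toward theta.
    global_keep = (1.0 - theta) * math.exp(-gamma * step) + theta
    # Depth dimension: shallow blocks keep a higher probability than deep ones.
    return 1.0 - (layer_idx / num_layers) * (1.0 - global_keep)

# Example: the deepest block of a 12-layer BERT-base model at step 50,000.
print(keep_probability(step=50_000, layer_idx=11, num_layers=12))
```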

The `deepspeed_bsz4k_progressive_layer_drop_config_seq128.json` file allows users to specify DeepSpeed options in terms of batch size, micro batch size, optimizer, learning rate, sequence length, and other parameters. Below is the DeepSpeed configuration file we use for running BERT and PLD.

```json
{
  "train_batch_size": 4096,
  "train_micro_batch_size_per_gpu": 16,
  "steps_per_print": 1000,
  "prescale_gradients": true,
  "gradient_predivide_factor": 8,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 1e-3,
      "weight_decay": 0.01,
      "bias_correction": false
    }
  },
  "gradient_clipping": 1.0,

  "wall_clock_breakdown": false,

  "fp16": {
    "enabled": true,
    "loss_scale": 0
  }
}
```

Note that the above configuration assumes training on 64 x 32GB V100 GPUs. Each GPU uses a micro batch size of 16 and accumulates gradients until the effective batch size reaches 4096. If you have GPUs with less memory, you may need to reduce `train_micro_batch_size_per_gpu`. Alternatively, if you have more GPUs, you can increase `train_batch_size` to speed up training. The hyperparameters we use for pre-training BERT with PLD enabled are listed in Table 1 below.
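
As a quick sanity check on these batch-size settings, the number of gradient accumulation steps follows directly from the three numbers above. The small snippet below works through the arithmetic for the 64-GPU setup (the variable names are ours, not DeepSpeed configuration keys).

```python
# Worked example for the configuration above (64 x V100, micro batch size 16).
train_batch_size = 4096      # effective (global) batch size
micro_batch_per_gpu = 16     # "train_micro_batch_size_per_gpu"
num_gpus = 64

grad_accum_steps = train_batch_size // (micro_batch_per_gpu * num_gpus)
print(grad_accum_steps)      # -> 4 micro-batches accumulated per optimizer step

# With smaller GPUs, e.g. 16 GPUs at micro batch size 8, the same global batch
# simply requires more accumulation steps:
print(4096 // (8 * 16))      # -> 32
```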

| Parameters | Value |
| ------------------------------ | ----------------------- |
| Effective batch size | 4K |
| Train micro batch size per GPU | 16 |
| Optimizer | Adam |
| Peak learning rate | 1e-3 |
| Sequence-length | 128 |
| Learning rate scheduler | Warmup linear decay exp |
| Warmup ratio | 0.02 |
| Decay rate | 0.99 |
| Decay step | 1000 |
| Weight decay | 0.01 |
| Gradient clipping | 1.0 |

Table 1. Pre-training hyperparameters
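
The "Warmup linear decay exp" scheduler in Table 1 is not spelled out in the configuration snippet above, so here is a minimal sketch of our reading of that row: a linear warmup over the first 2% of steps up to the 1e-3 peak, followed by an exponential decay of 0.99 applied every 1000 steps. Treat this as an interpretation of the table, not the exact scheduler implementation shipped with the example.

```python
def lr_at_step(step, total_steps, peak_lr=1e-3, warmup_ratio=0.02,
               decay_rate=0.99, decay_step=1000):
    """Sketch of a 'warmup linear, decay exponential' schedule (our reading of Table 1)."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        # Linear warmup from 0 to the peak learning rate.
        return peak_lr * step / max(1, warmup_steps)
    # Exponential decay: multiply by decay_rate once per decay_step after warmup.
    return peak_lr * decay_rate ** ((step - warmup_steps) / decay_step)

# Example: the learning rate 20k steps into a 200k-step run.
print(lr_at_step(step=20_000, total_steps=200_000))
```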

**Note:** DeepSpeed now uses PreLayerNorm as the default way of training BERT because of its ability to avoid vanishing gradients, stabilize optimization, and deliver performance gains, as described in our fastest BERT training [blog post](https://www.deepspeed.ai/news/2020/05/27/fastest-bert-training.html). We therefore support the switchable Transformer block directly on BERT with PreLayerNorm. The implementation can be found at `example\bing_bert\nvidia\modelingpreln_layerdrop.py`.
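
For readers curious what a "switchable" block means in practice, the sketch below shows the general stochastic-depth pattern on a pre-LayerNorm block: with some keep probability the sublayers are executed and added to the residual stream, otherwise the block reduces to the identity for that mini-batch. This is a simplified illustration under common stochastic-depth conventions, not the actual code in `modelingpreln_layerdrop.py`.

```python
import torch
from torch import nn

class SwitchablePreLNBlock(nn.Module):
    """Simplified sketch of a PLD-style switchable pre-LayerNorm Transformer block.

    Not the DeepSpeed implementation: the attention/FFN modules and the exact
    gating and rescaling rules are placeholders.
    """
    def __init__(self, hidden_size, attention, feed_forward):
        super().__init__()
        self.ln_attn = nn.LayerNorm(hidden_size)
        self.ln_ffn = nn.LayerNorm(hidden_size)
        self.attention = attention        # e.g. a self-attention module
        self.feed_forward = feed_forward  # e.g. a two-layer MLP

    def forward(self, x, keep_prob=1.0):
        # During training, skip the whole block with probability 1 - keep_prob.
        if self.training and torch.rand(1).item() > keep_prob:
            return x  # identity: the block is "switched off" for this mini-batch
        # Pre-LayerNorm residual sublayers; dividing by keep_prob keeps the
        # expected residual contribution constant (a common stochastic-depth choice).
        x = x + self.attention(self.ln_attn(x)) / keep_prob
        x = x + self.feed_forward(self.ln_ffn(x)) / keep_prob
        return x
```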

## Fine-tuning with DeepSpeed on GLUE Tasks

We use GLUE for the fine-tuning tasks. The [General Language Understanding Evaluation (GLUE) benchmark](https://gluebenchmark.com/) is a collection of sentence- and sentence-pair natural language understanding tasks, including question answering, sentiment analysis, and textual entailment. It is designed to favor sample-efficient learning and knowledge transfer across a range of linguistic tasks in different domains.

One can download all GLUE data using the provided helper [script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e). Once the data has been downloaded, move it to `/data/GlueData`, which is the default location for hosting GLUE data. We can then use the PLD pre-trained BERT model checkpoint to run the fine-tuning.

The main part of fine-tuning is done in `run_glue_classifier_bert_base.py`, which has
already been modified to use DeepSpeed. Before fine-tuning, one needs to specify the BERT model configuration through the following config in `run_glue_classifier_bert_base.py`. In this case, it has already been set to match the configuration of the pre-trained model.

```python
bert_model_config = {
    "vocab_size_or_config_json_file": 119547,
    "hidden_size": 768,
    "num_hidden_layers": 12,
    "num_attention_heads": 12,
    "intermediate_size": 3072,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "attention_probs_dropout_prob": 0.1,
    "max_position_embeddings": 512,
    "type_vocab_size": 2,
    "initializer_range": 0.02
}
```

Next, one can load a DeepSpeed-style checkpoint with the following line, which has also already been added in the script.

```python
model.load_state_dict(checkpoint_state_dict['module'], strict=False)
```
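
The checkpoint itself is an ordinary PyTorch checkpoint whose model weights live under the `module` key, as the line above suggests. A sketch of the full loading sequence might look like the following, where the checkpoint path is assumed and `model` is the classifier already constructed in the script.

```python
import torch

# Assumed path: the checkpoint produced by the pre-training run above.
checkpoint_path = "deepspeed_checkpoint.pt"

# Load onto CPU first; device placement is handled later by the script.
checkpoint_state_dict = torch.load(checkpoint_path, map_location="cpu")

# `model` is the BERT classifier built earlier in run_glue_classifier_bert_base.py.
# DeepSpeed stores the model weights under the 'module' key; strict=False
# tolerates keys that do not match, e.g. a freshly initialized classifier head.
missing, unexpected = model.load_state_dict(checkpoint_state_dict["module"], strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```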

Finally, the `run_glue_classifier_bert_base.sh` script invokes the fine-tuning run and sets up several hyperparameters relevant to fine-tuning.

```shell
bash run_glue_bert_base_finetune.sh [task] [batch size] [learning rate] [number of epochs] [job name] [checkpoint path]
```

An example would be:

```shell
bash run_glue_bert_base_finetune.sh MNLI 32 3e-5 5 "fine_tune_MNLI" deepspeed_checkpoint.pt
```



### Expected Results

The fine-tuning results can be found under the `logs` directory; below are the expected results for PLD on the GLUE tasks. The "Lr" row indicates the learning rate we used to obtain the corresponding accuracy for each task.

| | RTE | MRPC | STS-B | CoLA | SST-2 | QNLI | QQP | MNLI-m/mm |
| ------- | :--: | --------- | --------- | ---- | ----- | ---- | --------- | --------- |
| Metrics | Acc. | F1/Acc. | PCC/SCC | Acc. | Acc. | Acc. | F1/Acc. | Acc. |
| PLD | 69.3 | 86.6/84.3 | 90.0/89.6 | 55.8 | 91.6 | 90.7 | 89.6/91.2 | 84.1/83.8 |
| Lr | 7e-5 | 9e-5 | 7e-5 | 5e-5 | 7e-5 | 9e-5 | 2e-4 | 3e-5 |