Doc corrections (#3435)
* Update 2020-09-09-sparse-attention.md

* Update MoQ-tutorial.md

---------

Co-authored-by: Logan Adams <[email protected]>
goodship1 and loadams authored May 8, 2023
1 parent f3f4c44 commit 4132118
Showing 1 changed file with 2 additions and 2 deletions.
docs/_tutorials/MoQ-tutorial.md (4 changes: 2 additions & 2 deletions)
@@ -3,7 +3,7 @@ title: "DeepSpeed Mixture-of-Quantization (MoQ)"
 tags: training quantization
 ---
 
-DeepSpeed introduces new support for model compression using quantization, called Mixture-of-Quantization (MoQ). MoQ is designed on top of QAT (Quantization-Aware Training), with the difference that it schedules various data precisions across the training process. It starts with quantizing the model with a high precision, such as FP16 or 16-bit quantization, and reduce the precision through a pre-defined schedule until reaching the target quantization bits (like 8-bit). Moreover, we use second-order information of the model parameters to dynamically adjust the quantization schedule for each of layer of the network separately. We have seen that by adding such schedule and using various data precision in the training process, we can quantize the model with better quality and preserve accuracy. For a better understanding of MoQ methodology, please refer to MoQ deep-dive, [here](https://www.deepspeed.ai/2021/05/04/MoQ.html).
+DeepSpeed introduces new support for model compression using quantization, called Mixture-of-Quantization (MoQ). MoQ is designed on top of QAT (Quantization-Aware Training), with the difference that it schedules various data precisions across the training process. It starts with quantizing the model with a high precision, such as FP16 or 16-bit quantization, and reduce the precision through a pre-defined schedule until reaching the target quantization bits (like 8-bit). Moreover, we use second-order information of the model parameters to dynamically adjust the quantization schedule for each layer of the network separately. We have seen that by adding such schedule and using various data precision in the training process, we can quantize the model with better quality and preserve accuracy. For a better understanding of MoQ methodology, please refer to MoQ deep-dive, [here](https://www.deepspeed.ai/2021/05/04/MoQ.html).
 
 Below, we use fine-tune for the GLUE tasks as an illustration of how to use MoQ.
 
@@ -71,7 +71,7 @@ Before fine-tuning the GLUE tasks using DeepSpeed MoQ, you need:
 
 ### DeepSpeed Configuration File
 
-Prepare a config file `test.json` as below, please note following important parameters for quantization training:
+Prepare a config file `test.json` as below, please note the following important parameters for quantization training:
 
 ```
 {
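The paragraph corrected above describes MoQ's core mechanic: training starts at a high precision (for example 16-bit) and the bit width is stepped down on a pre-defined schedule until the target (for example 8-bit) is reached, with second-order information optionally adjusting the pace per layer. The tutorial's actual `test.json` contents are truncated in this diff; as a rough illustrative sketch only, with `start_bits`, `target_bits`, and `quantize_period` used here as placeholder names rather than confirmed DeepSpeed configuration keys, a schedule of that shape could look like this:

```python
# Minimal sketch of a MoQ-style precision schedule (illustrative only; not the
# DeepSpeed implementation). Precision starts high and is stepped down toward
# the target bit width once per `quantize_period` training steps. MoQ can also
# use second-order (eigenvalue) information to adjust this pace per layer;
# that refinement is omitted here.

def scheduled_bits(step: int, start_bits: int = 16, target_bits: int = 8,
                   quantize_period: int = 400) -> int:
    """Return the quantization bit width to use at a given training step."""
    # Each completed period reduces precision by one bit, never below the target.
    reductions = step // quantize_period
    return max(target_bits, start_bits - reductions)


if __name__ == "__main__":
    # Precision decays from 16 bits toward 8 bits as training progresses.
    for step in (0, 400, 1600, 3200, 10000):
        print(step, scheduled_bits(step))
```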
