Commit

Minor grammatical correction. (#94)
Not even sure if you need this PR, just thought I'd proofread the README.

Changed only a few sentences.

Co-authored-by: ZhaoFancy <[email protected]>
cvyl and ZhaoFancy authored Nov 14, 2023
1 parent 7fef28e commit 012c742
Showing 1 changed file with 6 additions and 8 deletions.
14 changes: 6 additions & 8 deletions README.md
@@ -94,15 +94,15 @@ Note that the `latest` tag always points to the latest code in the `main`
branch. To test a stable version, please replace it with a specific
[tag](https://github.com/01-ai/Yi/tags).

-If you prefer trying out with your local development environment. First, create
+If you prefer to try out with your local development environment. First, create
a virtual environment and clone this repo. Then install the dependencies with
`pip install -r requirements.txt`. For the best performance, we recommend you
also install the latest version (`>=2.3.3`) of
[flash-attention](https://github.com/Dao-AILab/flash-attention#installation-and-features).
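The local setup described in the paragraph above can be sketched as a short shell session. This is a hedged illustration, not part of the commit: the clone URL, the `.venv` name, and the `flash-attn` pip package name are assumptions for the example.

```bash
# Sketch of the local setup: virtual environment, clone, dependencies.
python3 -m venv .venv             # create a virtual environment (name is illustrative)
source .venv/bin/activate
git clone https://github.com/01-ai/Yi.git   # assumed canonical repo URL
cd Yi
pip install -r requirements.txt   # install the project dependencies
# Optional, for the best performance (assumed pip package name):
pip install "flash-attn>=2.3.3"
```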

### 2. Download the model (optional)

-By default the model weights and tokenizer will be downloaded from
+By default, the model weights and tokenizer will be downloaded from
[HuggingFace](https://huggingface.co/01-ai) automatically in the next step. You
can also download them manually from the following places:

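As a side note to the paragraph above, the "automatic download from HuggingFace" ultimately fetches files at predictable Hub URLs. A minimal sketch, assuming the Hub's `resolve` URL convention and using `01-ai/Yi-6B` purely as an example repo id:

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for one file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example repo id and filename (assumed, for illustration only):
print(hf_file_url("01-ai/Yi-6B", "config.json"))
# → https://huggingface.co/01-ai/Yi-6B/resolve/main/config.json
```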
@@ -170,7 +170,7 @@ The Arctic is a place of great beauty. The ice and snow are a

</details>

-For more advanced usage, please refer the
+For more advanced usage, please refer to the
[doc](https://github.com/01-ai/Yi/tree/main/demo).

#### 3.2 Finetuning from the base model:
@@ -179,8 +179,7 @@ For more advanced usage, please refer the
bash finetune/scripts/run_sft_Yi_6b.sh
```

-Once finished, you can compare the finetuned model and the base model with the
-following command:
+Once finished, you can compare the finetuned model and the base model with the following command:

```bash
bash finetune/scripts/run_eval.sh
@@ -199,15 +198,15 @@ python quantization/gptq/quant_autogptq.py \
--trust_remote_code
```

-Once finished, you can then evaluate the resulted model as follows:
+Once finished, you can then evaluate the resulting model as follows:

```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```

-For more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq)
+For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq)

##### AWQ
```bash
@@ -227,7 +226,6 @@ python quantization/awq/eval_quantized_model.py \

For more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq)


## Disclaimer

We use data compliance checking algorithms during the training process, to
