Update Model Performance Documentation with Yi-34B variations Result #243

Merged · 3 commits · Dec 9, 2023
32 changes: 19 additions & 13 deletions README.md
@@ -94,6 +94,25 @@ sequence length and can be extended to 32K during inference time.

</details>

## Ecosystem

🤗 You are encouraged to create a PR and share your awesome work built on top of
the Yi series models.

- Serving
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally.
- Quantization
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- Finetuning
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
  - [SUSTech/SUS-Chat-34B](https://huggingface.co/SUSTech/SUS-Chat-34B): This
    model ranks first among all models below 70B and outperforms
    [deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat),
    a model twice its size. You can check the results on the
    [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
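As context for the GGUF and GPTQ checkpoints listed above: both formats store weights at reduced precision (commonly 4-bit) to shrink a 34B model enough to run on consumer hardware. The sketch below is a toy illustration of a symmetric 4-bit round-trip, not the actual GGUF or GPTQ algorithms, which use more sophisticated groupwise and error-compensating schemes.

```python
import numpy as np

# Toy sketch: round-trip a weight matrix through symmetric 4-bit
# quantization to show the size/accuracy trade-off that quantized
# checkpoints make. Not the real GGUF/GPTQ pipeline.
def quantize_int4(w: np.ndarray):
    # One scale per output row; the signed int4 range is [-8, 7].
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 256)).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)

# Reconstruction error stays modest even at 4 bits per weight.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative error: {rel_err:.4f}")
```

Real quantizers reduce this error further by using small groups (e.g. 32–128 weights per scale) and, in GPTQ's case, by adjusting remaining weights to compensate for rounding error.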

## Model Performance

### Base Model Performance
@@ -374,19 +393,6 @@ python quantization/awq/eval_quantized_model.py \

For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq).
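Evaluations of quantized models like the one above typically report perplexity on a held-out set. As a hypothetical toy illustration of the metric itself (not code from `quantization/awq/`), perplexity is the exponential of the mean negative log-likelihood of the reference tokens:

```python
import numpy as np

# Toy illustration of the perplexity metric; `logits` and `targets`
# here are made-up stand-ins for real model outputs and token ids.
def perplexity(logits: np.ndarray, targets: np.ndarray) -> float:
    # logits: (seq_len, vocab); targets: (seq_len,) reference token ids.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets].mean()
    return float(np.exp(nll))

# A uniform model over V tokens scores perplexity V; a perfect model
# approaches 1. Lower is better, so a quantized model is judged by how
# little its perplexity rises over the full-precision baseline.
vocab = 8
uniform_logits = np.zeros((5, vocab))
targets = np.array([0, 1, 2, 3, 4])
print(perplexity(uniform_logits, targets))  # ≈ 8.0, matching vocab size
```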

## Ecosystem

🤗 You are encouraged to create a PR and share your awesome work built on top of
the Yi series models.

- Serving
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally.
- Quantization
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- Finetuning
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)

## FAQ

1. **What dataset was this trained with?**