This repository has been archived by the owner on Oct 25, 2024. It is now read-only.

[DOC]Add modelscope example #1578

Merged
merged 8 commits into from
Jun 14, 2024

Conversation

intellinjun
Contributor

Type of Change

documentation

API changed or not

not

Description

Add modelscope example

Signed-off-by: intellinjun <[email protected]>

github-actions bot commented May 30, 2024

⚡ Required checks status: All passing 🟢

Groups summary

🟢 Format Scan Tests workflow

| Check ID | Status | Error details |
| --- | --- | --- |
| format-scan (pylint) | success | |
| format-scan (bandit) | success | |
| format-scan (cloc) | success | |
| format-scan (cpplint) | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_auto.py.

🟢 Optimize Unit Test workflow

| Check ID | Status | Error details |
| --- | --- | --- |
| optimize-unit-test-baseline | success | |
| optimize-unit-test-PR-test | success | |
| Genreate-OptimizeUT-Report | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_auto.py.

🟢 NeuralChat Unit Test

| Check ID | Status | Error details |
| --- | --- | --- |
| neuralchat-unit-test-baseline | success | |
| neuralchat-unit-test-PR-test | success | |
| Generate-NeuralChat-Report | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_auto.py.

🟢 Engine Unit Test workflow

| Check ID | Status | Error details |
| --- | --- | --- |
| engine-unit-test-baseline | success | |
| engine-unit-test-PR-test | success | |
| Genreate-Engine-Report | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_auto.py.

🟢 Chat Bot Test workflow

| Check ID | Status | Error details |
| --- | --- | --- |
| call-inference-llama-2-7b-chat-hf / inference test | success | |
| call-inference-mpt-7b-chat / inference test | success | |

These checks are required after the changes to intel_extension_for_transformers/transformers/modeling/modeling_auto.py.


Thank you for your contribution! 💜

Note
This comment is automatically generated and will be updated every 180 seconds for the next 6 hours. If you have any other questions, contact VincyZhang or XuehaoSun for help.

@@ -0,0 +1,24 @@
# ModelScope with ITREX

Intel extension for transformers(ITREX) support almost all the LLMs in Pytorch format from ModelScope such as phi,Qwen,ChatGLM,Baichuan,gemma,etc.
Contributor

Suggested change:

phi,Qwen,ChatGLM,Baichuan,gemma,etc.

phi, Qwen, ChatGLM, Baichuan, gemma, etc.

Signed-off-by: intellinjun <[email protected]>

ITREX provides a script that demonstrates the use of ModelScope. Run it with the following command:
```bash
numactl -m 0 -C 0-55 python run_modelscope_example.py --model_path=qwen/Qwen-7B --prompt=你好
```
Contributor

What about `numactl -l -C xx-xx` if the user runs across sockets?
Please add a note explaining why numactl is necessary (it improves performance) and showing users how to bind core IDs.
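The reviewer's point about core binding can be illustrated with a short sketch (ours, not part of the PR) that reads the CPUs available to the current process and prints a matching command line. It assumes Linux (`os.sched_getaffinity`), and the helper name `suggest_core_binding` is hypothetical:

```python
# Sketch: inspect the logical CPUs visible to this process to help choose
# values for `numactl -C <core_ids>` and OMP_NUM_THREADS.
# Linux-only: os.sched_getaffinity is not available on macOS/Windows.
import os


def suggest_core_binding():
    cores = sorted(os.sched_getaffinity(0))   # logical CPUs this process may run on
    core_range = f"{cores[0]}-{cores[-1]}"    # e.g. "0-55" on a 56-core node
    return len(cores), core_range


n_cores, core_range = suggest_core_binding()
# Print a command line template analogous to the one in the doc under review.
print(f"OMP_NUM_THREADS={n_cores} numactl -m 0 -C {core_range} "
      f"python run_modelscope_example.py ...")
```

Binding memory (`-m`) and cores (`-C`) to one NUMA node avoids cross-socket memory traffic, which is why the doc's example pins to node 0 and cores 0-55.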


Contributor

Suggested change:

numactl -m 0 -C 0-55 python run_modelscope_example.py --model_path=qwen/Qwen-7B --prompt=你好

change to

OMP_NUM_THREADS=<threads> numactl -m <node> -C <cores> python run_modelscope_example.py --model <MODEL_NAME_OR_PATH> --prompt=你好
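For context, a script like `run_modelscope_example.py` might be structured as below. This is a hedged sketch, not the PR's actual file: the `model_hub="modelscope"` argument follows the ITREX documentation, but the exact flag names and defaults here are assumptions. Heavy imports are kept inside `main` so the CLI can be inspected without ITREX installed:

```python
# Hypothetical sketch of a ModelScope + ITREX runner script.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(description="Run a ModelScope model with ITREX")
    parser.add_argument("--model_path", default="qwen/Qwen-7B",
                        help="ModelScope model id or local path")
    parser.add_argument("--prompt", default="你好", help="prompt text")
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    # Deferred imports: only needed when actually running inference.
    from transformers import AutoTokenizer
    from intel_extension_for_transformers.transformers import AutoModelForCausalLM

    # model_hub="modelscope" tells ITREX to fetch weights from ModelScope
    # instead of the Hugging Face Hub (per the ITREX docs; treat as assumption).
    model = AutoModelForCausalLM.from_pretrained(args.model_path,
                                                 model_hub="modelscope")
    tokenizer = AutoTokenizer.from_pretrained(args.model_path,
                                              trust_remote_code=True)
    input_ids = tokenizer(args.prompt, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With this layout the argument parsing can be exercised on its own, e.g. `build_parser().parse_args(["--prompt", "hello"])`.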

## Supported and Validated Models
We have validated the majority of existing models using modelscope==1.13.1:
Contributor

Please add the requirements.txt.
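A minimal requirements.txt for this example might look like the following. The `modelscope==1.13.1` pin comes from the doc above; the other entries (and their unpinned versions) are assumptions:

```
modelscope==1.13.1
intel-extension-for-transformers
torch
transformers
```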

Contributor

@a32543254 a32543254 left a comment

LGTM

@@ -0,0 +1,24 @@
# ModelScope with ITREX

Intel extension for transformers(ITREX) support almost all the LLMs in Pytorch format from ModelScope such as phi, Qwen, ChatGLM, Baichuan, gemma, etc.
Contributor

Suggested change:

Intel extension for transformers

Intel® Extension for Transformers

Signed-off-by: intellinjun <[email protected]>
@intellinjun intellinjun requested a review from PenghuiCheng as a code owner May 30, 2024 06:06
@kevinintel kevinintel merged commit 85f2495 into main Jun 14, 2024
22 checks passed
@kevinintel kevinintel deleted the modelscope_example branch June 14, 2024 04:19
4 participants