From fea2bb130365534ba9f8beb5a331bea4e2db3fff Mon Sep 17 00:00:00 2001
From: tianyil1
Date: Tue, 11 Jun 2024 14:42:38 +0800
Subject: [PATCH] refine the readme with the default model
 'Intel/neural-chat-7b-v3-3'

Signed-off-by: tianyil1
---
 comps/llms/text-generation/vllm/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/comps/llms/text-generation/vllm/README.md b/comps/llms/text-generation/vllm/README.md
index 18553a0f90..3386315524 100644
--- a/comps/llms/text-generation/vllm/README.md
+++ b/comps/llms/text-generation/vllm/README.md
@@ -47,5 +47,5 @@ You have the flexibility to customize two parameters according to your specific
 ```bash
 export vLLM_LLM_ENDPOINT="http://xxx.xxx.xxx.xxx:8080"
-export LLM_MODEL= # example: export LLM_MODEL="mistralai/Mistral-7B-v0.1"
+export LLM_MODEL= # example: export LLM_MODEL="Intel/neural-chat-7b-v3-3"
 ```
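
The README hunk above asks the user to set two environment variables before launching the vLLM text-generation microservice. A minimal sketch of that configuration, assuming a locally reachable vLLM endpoint (the `localhost:8080` URL is a placeholder standing in for the `xxx.xxx.xxx.xxx` address in the README; substitute your own host):

```shell
# Placeholder endpoint; replace with the address of your running vLLM service.
export vLLM_LLM_ENDPOINT="http://localhost:8080"
# The default model suggested by this patch.
export LLM_MODEL="Intel/neural-chat-7b-v3-3"

# Confirm both variables are set before starting the microservice.
echo "Endpoint: ${vLLM_LLM_ENDPOINT}"
echo "Model: ${LLM_MODEL}"
```

With both variables exported, the microservice picks them up from its environment at startup, so no source changes are needed to switch models.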