diff --git a/comps/embeddings/README.md b/comps/embeddings/README.md
index 407644ada..8ac6dfe0c 100644
--- a/comps/embeddings/README.md
+++ b/comps/embeddings/README.md
@@ -44,8 +44,7 @@ First, you need to start a TEI service.
 ```bash
 your_port=8090
 model="BAAI/bge-large-en-v1.5"
-revision="refs/pr/5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model --revision $revision
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
 ```
 
 Then you need to test your TEI service using the following commands:
@@ -88,8 +87,7 @@ First, you need to start a TEI service.
 ```bash
 your_port=8090
 model="BAAI/bge-large-en-v1.5"
-revision="refs/pr/5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model --revision $revision
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
 ```
 
 Then you need to test your TEI service using the following commands:
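
Both hunks leave the README's follow-up step ("test your TEI service") in place, but the actual test commands fall outside the diff context. A minimal sketch of such a check, assuming TEI's standard `/embed` endpoint and the `your_port=8090` value set above (the exact commands in the README may differ):

```bash
# Smoke-test the TEI container started above. Assumes it is reachable
# on localhost at the port published via $your_port (8090 here).
curl http://localhost:8090/embed \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs":"What is Deep Learning?"}'
```

A successful response is a JSON array with one embedding vector per input, confirming the service starts correctly without the dropped `--revision` flag.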