unify default reranking model with BAAI/bge-reranker-base (#623)
Signed-off-by: chensuyue <[email protected]>
Signed-off-by: ZePan110 <[email protected]>
chensuyue authored Sep 10, 2024
1 parent 8a11413 · commit 48d4e53
Showing 15 changed files with 22 additions and 22 deletions.
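
Both defaults stay overridable at runtime, so deployments that want to keep the previous large models can do so through the same environment variables this commit touches (a minimal sketch; the variable names come from the diffs below):

```bash
# Optional rollback to the previous defaults; no code edit required.
# EMBED_MODEL is read by the dataprep config below, RERANK_MODEL_ID by the
# reranking TEI launch snippet in comps/reranks/README.md.
export EMBED_MODEL="BAAI/bge-large-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-large"
```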
README.md: 2 additions & 2 deletions

@@ -55,7 +55,7 @@ The initially supported `Microservices` are described in the below table. More `
 <tr>
 <td rowspan="2"><a href="./comps/embeddings/README.md">Embedding</a></td>
 <td rowspan="2"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td rowspan="2"><a href="https://huggingface.co/BAAI/bge-large-en-v1.5">BAAI/bge-large-en-v1.5</a></td>
+<td rowspan="2"><a href="https://huggingface.co/BAAI/bge-base-en-v1.5">BAAI/bge-base-en-v1.5</a></td>
 <td><a href="https://github.com/huggingface/tei-gaudi">TEI-Gaudi</a></td>
 <td>Gaudi2</td>
 <td>Embedding on Gaudi2</td>
@@ -76,7 +76,7 @@ The initially supported `Microservices` are described in the below table. More `
 <tr>
 <td rowspan="2"><a href="./comps/reranks/README.md">Reranking</a></td>
 <td rowspan="2"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td ><a href="https://huggingface.co/BAAI/bge-reranker-large">BAAI/bge-reranker-large</a></td>
+<td ><a href="https://huggingface.co/BAAI/bge-reranker-base">BAAI/bge-reranker-base</a></td>
 <td><a href="https://github.com/huggingface/tei-gaudi">TEI-Gaudi</a></td>
 <td>Gaudi2</td>
 <td>Reranking on Gaudi2</td>
comps/dataprep/redis/README.md: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ First, you need to start a TEI service.

 ```bash
 your_port=6006
-model="BAAI/bge-large-en-v1.5"
+model="BAAI/bge-base-en-v1.5"
 revision="refs/pr/5"
 docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id $model --revision $revision
 ```
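
Once that TEI container is up, the new default can be smoke-tested against TEI's `/embed` route (a minimal sketch; the port matches `your_port=6006` above, and the example sentence is illustrative):

```bash
# Embed one sentence through the TEI service started above.
curl http://localhost:6006/embed \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs":"What is Deep Learning?"}'
# bge-base-en-v1.5 returns 768-dimensional vectors (the large model returned 1024).
```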
comps/dataprep/redis/langchain/config.py: 1 addition & 1 deletion

@@ -5,7 +5,7 @@

 # Embedding model

-EMBED_MODEL = os.getenv("EMBED_MODEL", "BAAI/bge-large-en-v1.5")
+EMBED_MODEL = os.getenv("EMBED_MODEL", "BAAI/bge-base-en-v1.5")

 # Redis Connection Information
 REDIS_HOST = os.getenv("REDIS_HOST", "localhost")
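
Because the default sits behind `os.getenv`, this change only affects deployments that never set `EMBED_MODEL`; the fallback behavior is easy to confirm:

```bash
# Unset: the new default wins.
python3 -c 'import os; print(os.getenv("EMBED_MODEL", "BAAI/bge-base-en-v1.5"))'
# -> BAAI/bge-base-en-v1.5

# Set: the environment variable still takes precedence.
EMBED_MODEL="BAAI/bge-large-en-v1.5" \
    python3 -c 'import os; print(os.getenv("EMBED_MODEL", "BAAI/bge-base-en-v1.5"))'
# -> BAAI/bge-large-en-v1.5
```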
comps/embeddings/README.md: 4 additions & 4 deletions

@@ -43,7 +43,7 @@ First, you need to start a TEI service.

 ```bash
 your_port=8090
-model="BAAI/bge-large-en-v1.5"
+model="BAAI/bge-base-en-v1.5"
 docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
 ```

@@ -64,7 +64,7 @@ cd langchain
 # run with llama_index
 cd llama_index
 export TEI_EMBEDDING_ENDPOINT="http://localhost:$yourport"
-export TEI_EMBEDDING_MODEL_NAME="BAAI/bge-large-en-v1.5"
+export TEI_EMBEDDING_MODEL_NAME="BAAI/bge-base-en-v1.5"
 python embedding_tei.py
 ```

@@ -86,7 +86,7 @@ First, you need to start a TEI service.

 ```bash
 your_port=8090
-model="BAAI/bge-large-en-v1.5"
+model="BAAI/bge-base-en-v1.5"
 docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
 ```

@@ -103,7 +103,7 @@ Export the `TEI_EMBEDDING_ENDPOINT` for later usage:

 ```bash
 export TEI_EMBEDDING_ENDPOINT="http://localhost:$yourport"
-export TEI_EMBEDDING_MODEL_NAME="BAAI/bge-large-en-v1.5"
+export TEI_EMBEDDING_MODEL_NAME="BAAI/bge-base-en-v1.5"
 ```

 ### 2.2 Build Docker Image
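
With the TEI backend and the microservice from this README running, an end-to-end request exercises the new default (a hedged sketch: port 6000 is the container port the test scripts map for this service, and the `/v1/embeddings` route with a `{"text": ...}` payload is assumed from the repository's conventions; adjust to your deployment):

```bash
# Assumed route and payload for the OPEA embedding microservice.
curl http://localhost:6000/v1/embeddings \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"text":"Hello, world!"}'
```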
comps/embeddings/langchain/local_embedding.py: 1 addition & 1 deletion

@@ -40,5 +40,5 @@ def embedding(input: TextDoc) -> EmbedDoc:


 if __name__ == "__main__":
-    embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")
+    embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-base-en-v1.5")
     opea_microservices["opea_service@local_embedding"].start()
comps/embeddings/llama_index/embedding_tei.py: 1 addition & 1 deletion

@@ -31,7 +31,7 @@ def embedding(input: TextDoc) -> EmbedDoc:


 if __name__ == "__main__":
-    tei_embedding_model_name = os.getenv("TEI_EMBEDDING_MODEL_NAME", "BAAI/bge-large-en-v1.5")
+    tei_embedding_model_name = os.getenv("TEI_EMBEDDING_MODEL_NAME", "BAAI/bge-base-en-v1.5")
     tei_embedding_endpoint = os.getenv("TEI_EMBEDDING_ENDPOINT", "http://localhost:8090")
     embeddings = TextEmbeddingsInference(model_name=tei_embedding_model_name, base_url=tei_embedding_endpoint)
     logger.info("TEI Gaudi Embedding initialized.")
comps/embeddings/llama_index/local_embedding.py: 1 addition & 1 deletion

@@ -31,5 +31,5 @@ def embedding(input: TextDoc) -> EmbedDoc:


 if __name__ == "__main__":
-    embeddings = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-large-en-v1.5")
+    embeddings = HuggingFaceInferenceAPIEmbedding(model_name="BAAI/bge-base-en-v1.5")
     opea_microservices["opea_service@local_embedding"].start()
comps/reranks/README.md: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ export HF_TOKEN=${your_hf_api_token}
 export LANGCHAIN_TRACING_V2=true
 export LANGCHAIN_API_KEY=${your_langchain_api_key}
 export LANGCHAIN_PROJECT="opea/reranks"
-export RERANK_MODEL_ID="BAAI/bge-reranker-large"
+export RERANK_MODEL_ID="BAAI/bge-reranker-base"
 revision=refs/pr/4
 volume=$PWD/data
 docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id $RERANK_MODEL_ID --revision $revision --hf-api-token $HF_TOKEN
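
With the reranker container from this snippet listening on port 6060, TEI's `/rerank` route gives a quick check of the new default (a minimal sketch; query and passages are illustrative):

```bash
# Rerank two candidate passages against a query; higher score = more relevant.
curl http://localhost:6060/rerank \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"query":"What is Deep Learning?","texts":["Deep learning is a subset of machine learning.","Paris is the capital of France."]}'
# The first passage should come back with the higher relevance score.
```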
comps/reranks/langchain-mosec/mosec-docker/Dockerfile: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ RUN pip3 install intel-extension-for-pytorch==2.2.0
 RUN pip3 install transformers sentence-transformers
 RUN pip3 install llmspec mosec

-RUN cd /home/user/ && export HF_ENDPOINT=https://hf-mirror.com && huggingface-cli download --resume-download BAAI/bge-reranker-large --local-dir /home/user/bge-reranker-large
+RUN cd /home/user/ && export HF_ENDPOINT=https://hf-mirror.com && huggingface-cli download --resume-download BAAI/bge-reranker-base --local-dir /home/user/bge-reranker-large
 USER user
 ENV EMB_MODEL="/home/user/bge-reranker-large/"

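
Note that the new line still downloads into `/home/user/bge-reranker-large`, so the `EMB_MODEL` path keeps working even though the directory name no longer matches the model it holds. The weights are baked in at build time, so picking up the new default means rebuilding (a sketch; the tag matches the one the test scripts below expect):

```bash
# Build from the repository root; the reranker weights are fetched during the build.
docker build \
    -f comps/reranks/langchain-mosec/mosec-docker/Dockerfile \
    -t opea/reranking-langchain-mosec-endpoint:comps .
```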
comps/reranks/tei/local_reranking.py: 1 addition & 1 deletion

@@ -41,5 +41,5 @@ def reranking(input: SearchedDoc) -> RerankedDoc:


 if __name__ == "__main__":
-    reranker_model = CrossEncoder(model_name="BAAI/bge-reranker-large", max_length=512)
+    reranker_model = CrossEncoder(model_name="BAAI/bge-reranker-base", max_length=512)
     opea_microservices["opea_service@local_reranking"].start()
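
`CrossEncoder` comes from the sentence-transformers package and downloads its model on first use, so this switch only changes what that first run fetches (a minimal sketch for running the service locally):

```bash
# sentence-transformers provides CrossEncoder; the first run downloads bge-reranker-base.
pip install sentence-transformers
python comps/reranks/tei/local_reranking.py
```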
tests/test_embeddings_langchain-mosec.sh: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ function build_docker_images() {

 function start_service() {
     mosec_endpoint=5001
-    model="BAAI/bge-large-en-v1.5"
+    model="BAAI/bge-base-en-v1.5"
     unset http_proxy
     docker run -d --name="test-comps-embedding-langchain-mosec-endpoint" -p $mosec_endpoint:8000 opea/embedding-langchain-mosec-endpoint:comps
     export MOSEC_EMBEDDING_ENDPOINT="http://${ip_address}:${mosec_endpoint}"
tests/test_embeddings_langchain.sh: 2 additions & 2 deletions

@@ -21,10 +21,10 @@ function build_docker_images() {

 function start_service() {
     tei_endpoint=5001
-    model="BAAI/bge-large-en-v1.5"
+    model="BAAI/bge-base-en-v1.5"
     revision="refs/pr/5"
     unset http_proxy
-    docker run -d --name="test-comps-embedding-tei-endpoint" -p $tei_endpoint:80 -v ./data:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id $model --revision $revision
+    docker run -d --name="test-comps-embedding-tei-endpoint" -p $tei_endpoint:80 -v ./data:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
     export TEI_EMBEDDING_ENDPOINT="http://${ip_address}:${tei_endpoint}"
     tei_service_port=5002
     docker run -d --name="test-comps-embedding-tei-server" -e http_proxy=$http_proxy -e https_proxy=$https_proxy -p ${tei_service_port}:6000 --ipc=host -e TEI_EMBEDDING_ENDPOINT=$TEI_EMBEDDING_ENDPOINT opea/embedding-tei:comps
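
Because the tests now pull `cpu-1.5` and drop the `--revision` pin, startup time is dominated by the model download; polling TEI's `/health` route avoids racing the container (a hedged sketch; the port matches `tei_endpoint=5001` above):

```bash
# Block until the TEI endpoint reports ready, or give up after ~2 minutes.
for _ in $(seq 1 60); do
    curl -sf "http://localhost:5001/health" && break
    sleep 2
done
```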
tests/test_embeddings_llama_index.sh: 2 additions & 2 deletions

@@ -22,9 +22,9 @@ function build_docker_images() {

 function start_service() {
     tei_endpoint=5001
-    model="BAAI/bge-large-en-v1.5"
+    model="BAAI/bge-base-en-v1.5"
     revision="refs/pr/5"
-    docker run -d --name="test-comps-embedding-tei-llama-index-endpoint" -p $tei_endpoint:80 -v ./data:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id $model --revision $revision
+    docker run -d --name="test-comps-embedding-tei-llama-index-endpoint" -p $tei_endpoint:80 -v ./data:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
     export TEI_EMBEDDING_ENDPOINT="http://${ip_address}:${tei_endpoint}"
     tei_service_port=5010
     docker run -d --name="test-comps-embedding-tei-llama-index-server" -e http_proxy=$http_proxy -e https_proxy=$https_proxy -p ${tei_service_port}:6000 --ipc=host -e TEI_EMBEDDING_ENDPOINT=$TEI_EMBEDDING_ENDPOINT opea/embedding-tei-llama-index:comps
tests/test_reranks_langchain-mosec.sh: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ function build_docker_images() {

 function start_service() {
     mosec_endpoint=5006
-    model="BAAI/bge-reranker-large"
+    model="BAAI/bge-reranker-base"
     unset http_proxy
     docker run -d --name="test-comps-reranking-langchain-mosec-endpoint" -p $mosec_endpoint:8000 opea/reranking-langchain-mosec-endpoint:comps
     export MOSEC_RERANKING_ENDPOINT="http://${ip_address}:${mosec_endpoint}"
tests/test_reranks_tei.sh: 2 additions & 2 deletions

@@ -21,10 +21,10 @@ function start_service() {
     tei_endpoint=5006
     # Remember to set HF_TOKEN before invoking this test!
     export HF_TOKEN=${HF_TOKEN}
-    model=BAAI/bge-reranker-large
+    model=BAAI/bge-reranker-base
     revision=refs/pr/4
     volume=$PWD/data
-    docker run -d --name="test-comps-reranking-tei-endpoint" -p $tei_endpoint:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id $model --revision $revision
+    docker run -d --name="test-comps-reranking-tei-endpoint" -p $tei_endpoint:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model

     export TEI_RERANKING_ENDPOINT="http://${ip_address}:${tei_endpoint}"
     tei_service_port=5007
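
To confirm which model a running TEI container actually serves after this change, its `/info` route reports the loaded `model_id` (a minimal sketch against the test endpoint above; the grep pattern assumes compact JSON output):

```bash
# The served model_id should now be BAAI/bge-reranker-base.
curl -s http://localhost:5006/info | grep -o '"model_id":"[^"]*"'
```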
