Refine READMEs after reorg #666

Merged · 11 commits · Sep 11, 2024
18 changes: 13 additions & 5 deletions comps/dataprep/README.md
@@ -19,20 +19,28 @@ export SUMMARIZE_IMAGE_VIA_LVM=1

## Dataprep Microservice with Redis

-For details, please refer to this [langchain readme](langchain/redis/README.md) or [llama index readme](llama_index/redis/README.md)
+For details, please refer to this [readme](redis/README.md)

## Dataprep Microservice with Milvus

-For details, please refer to this [readme](langchain/milvus/README.md)
+For details, please refer to this [readme](milvus/langchain/README.md)

## Dataprep Microservice with Qdrant

-For details, please refer to this [readme](langchain/qdrant/README.md)
+For details, please refer to this [readme](qdrant/langchain/README.md)

## Dataprep Microservice with Pinecone

-For details, please refer to this [readme](langchain/pinecone/README.md)
+For details, please refer to this [readme](pinecone/langchain/README.md)

## Dataprep Microservice with PGVector

-For details, please refer to this [readme](langchain/pgvector/README.md)
+For details, please refer to this [readme](pgvector/langchain/README.md)

+## Dataprep Microservice with VDMS
+
+For details, please refer to this [readme](vdms/README.md)
+
+## Dataprep Microservice with Multimodal
+
+For details, please refer to this [readme](multimodal/redis/langchain/README.md)
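
Since every hunk below rewrites relative links like these, a quick existence check catches any target the reorg missed; a minimal sketch (the loop is illustrative, not part of the repo's tooling):

```bash
# Sketch: extract relative markdown link targets from a README and flag missing ones.
# Run from the repository root; the README path is an example.
grep -oE '\]\([^)#]+\)' comps/dataprep/README.md \
  | sed 's/^](//; s/)$//' \
  | while read -r target; do
      case "$target" in http*) continue ;; esac  # skip absolute URLs
      [ -e "comps/dataprep/$target" ] || echo "broken link: $target"
    done
```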
12 changes: 6 additions & 6 deletions comps/dataprep/redis/README.md
@@ -1,6 +1,6 @@
# Dataprep Microservice with Redis

-We have provided dataprep microservice for multimodal data input (e.g., text and image) [here](../../multimodal/redis/langchain/README.md).
+We have provided dataprep microservice for multimodal data input (e.g., text and image) [here](../multimodal/redis/langchain/README.md).

For the text-input dataprep microservice, we provide two frameworks: `Langchain` and `LlamaIndex`. We also provide `Langchain_ray`, which uses Ray to parallelize data prep across multiple files (observed 5x - 15x speedup when processing 1000 files/links).

@@ -33,7 +33,7 @@ cd langchain_ray; pip install -r requirements_ray.txt

### 1.2 Start Redis Stack Server

-Please refer to this [readme](../../../vectorstores/redis/README.md).
+Please refer to this [readme](../../vectorstores/redis/README.md).

### 1.3 Setup Environment Variables

@@ -90,7 +90,7 @@ python prepare_doc_redis_on_ray.py

### 2.1 Start Redis Stack Server

-Please refer to this [readme](../../../vectorstores/redis/README.md).
+Please refer to this [readme](../../vectorstores/redis/README.md).

### 2.2 Setup Environment Variables

@@ -109,21 +109,21 @@ export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
- option 1: Start single-process version (for 1-10 files processing)

```bash
-cd ../../../
+cd ../../
docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain/Dockerfile .
```

- Build docker image with llama_index

```bash
-cd ../../../
+cd ../../
docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/llama_index/Dockerfile .
```

- option 2: Start multi-process version (for >10 files processing)

```bash
-cd ../../../../
+cd ../../../
docker build -t opea/dataprep-on-ray-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain_ray/Dockerfile .
```
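
All three `cd` corrections above preserve the same invariant: `docker build` runs from the GenAIComps checkout root, so that the `-f comps/...` Dockerfile path and the `.` build context both resolve. A sketch that avoids counting `..` levels altogether (resolving the root via git is our illustration, not the README's wording):

```bash
# Sketch: resolve the repository root explicitly, then build as the README does.
cd "$(git rev-parse --show-toplevel)"
docker build -t opea/dataprep-redis:latest \
  --build-arg https_proxy=$https_proxy \
  --build-arg http_proxy=$http_proxy \
  -f comps/dataprep/redis/langchain/Dockerfile .
```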

6 changes: 2 additions & 4 deletions comps/dataprep/vdms/README.md
@@ -27,7 +27,7 @@ cd langchain_ray; pip install -r requirements_ray.txt

## 1.2 Start VDMS Server

-Please refer to this [readme](../../vectorstores/langchain/vdms/README.md).
+Please refer to this [readme](../../vectorstores/vdms/README.md).

## 1.3 Setup Environment Variables

@@ -37,8 +37,6 @@ export https_proxy=${your_http_proxy}
export VDMS_HOST=${host_ip}
export VDMS_PORT=55555
export COLLECTION_NAME=${your_collection_name}
-export LANGCHAIN_TRACING_V2=true
-export LANGCHAIN_PROJECT="opea/gen-ai-comps:dataprep"
export PYTHONPATH=${path_to_comps}
```

@@ -62,7 +60,7 @@ python prepare_doc_redis_on_ray.py

## 2.1 Start VDMS Server

-Please refer to this [readme](../../vectorstores/langchain/vdms/README.md).
+Please refer to this [readme](../../vectorstores/vdms/README.md).

## 2.2 Setup Environment Variables

12 changes: 4 additions & 8 deletions comps/embeddings/README.md
@@ -18,20 +18,16 @@ Users are able to configure and build embedding-related services according to th

We support both `langchain` and `llama_index` for TEI serving.

-For details, please refer to [langchain readme](langchain/tei/README.md) or [llama index readme](llama_index/tei/README.md).
+For details, please refer to [langchain readme](tei/langchain/README.md) or [llama index readme](tei/llama_index/README.md).

## Embeddings Microservice with Mosec

-For details, please refer to this [readme](langchain/mosec/README.md).
+For details, please refer to this [readme](mosec/langchain/README.md).

-## Embeddings Microservice with Neural Speed
+## Embeddings Microservice with Multimodal

-For details, please refer to this [readme](neural-speed/README.md).
+For details, please refer to this [readme](multimodal/README.md).

-## Embeddings Microservice with Multimodal Clip

-For details, please refer to this [readme](multimodal_clip/README.md).

-## Embeddings Microservice with Multimodal Langchain

-For details, please refer to this [readme](multimodal_embeddings/README.md).
2 changes: 1 addition & 1 deletion comps/guardrails/README.md
@@ -4,7 +4,7 @@ The Guardrails service enhances the security of LLM-based applications by offeri

| MicroService | Description |
| ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
-| [Llama Guard](./llama_guard/README.md) | Provides guardrails for inputs and outputs to ensure safe interactions |
+| [Llama Guard](./llama_guard/langchain/README.md) | Provides guardrails for inputs and outputs to ensure safe interactions |
| [PII Detection](./pii_detection/README.md) | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI) |
| [Toxicity Detection](./toxicity_detection/README.md) | Detects Toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion) |

2 changes: 1 addition & 1 deletion comps/guardrails/llama_guard/langchain/README.md
@@ -79,7 +79,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
### 2.2 Build Docker Image

```bash
-cd ../../
+cd ../../../../
docker build -t opea/guardrails-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/guardrails/llama_guard/langchain/Dockerfile .
```

5 changes: 2 additions & 3 deletions comps/intent_detection/langchain/README.md
@@ -35,7 +35,7 @@ export TGI_LLM_ENDPOINT="http://${your_ip}:8008"
Start the intent detection microservice with the command below.

```bash
-cd /your_project_path/GenAIComps/
+cd ../../../
cp comps/intent_detection/langchain/intent_detection.py .
python intent_detection.py
```
@@ -55,7 +55,7 @@ export TGI_LLM_ENDPOINT="http://${your_ip}:8008"
### 2.3 Build Docker Image

```bash
-cd /your_project_path/GenAIComps
+cd ../../../
docker build --no-cache -t opea/llm-tgi:latest -f comps/intent_detection/langchain/Dockerfile .
```

@@ -68,7 +68,6 @@ docker run -it --name="intent-tgi-server" --net=host --ipc=host -e http_proxy=$h
### 2.5 Run with Docker Compose (Option B)

```bash
-cd /your_project_path/GenAIComps/comps/intent_detection/langchain
export LLM_MODEL_ID=${your_hf_llm_model}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
2 changes: 1 addition & 1 deletion comps/knowledgegraphs/langchain/README.md
@@ -73,7 +73,7 @@ curl $LLM_ENDPOINT/generate \
### 1.4 Start Microservice

```bash
-cd ../..
+cd ../../../
docker build -t opea/knowledge_graphs:latest \
--build-arg https_proxy=$https_proxy \
--build-arg http_proxy=$http_proxy \
3 changes: 1 addition & 2 deletions comps/llms/faq-generation/tgi/langchain/README.md
@@ -19,7 +19,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
### 1.2 Build Docker Image

```bash
-cd ../../../../
+cd ../../../../../
docker build -t opea/llm-faqgen-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/faq-generation/tgi/langchain/Dockerfile .
```

@@ -43,7 +43,6 @@ docker run -d --name="llm-faqgen-server" -p 9000:9000 --ipc=host -e http_proxy=$
### 1.4 Run Docker with Docker Compose (Option B)

```bash
-cd faq-generation/tgi/docker
docker compose -f docker_compose_llm.yaml up -d
```
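
The stale `cd` is dropped because the compose file is expected to sit next to this README after the reorg; a hedged sketch of the full invocation (the directory is an assumption based on this PR's layout):

```bash
# Sketch: run compose from the directory assumed to contain docker_compose_llm.yaml.
cd "$(git rev-parse --show-toplevel)/comps/llms/faq-generation/tgi/langchain"
docker compose -f docker_compose_llm.yaml up -d    # start the FaqGen LLM service
docker compose -f docker_compose_llm.yaml down     # tear down when finished
```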

2 changes: 1 addition & 1 deletion comps/llms/summarization/tgi/langchain/README.md
@@ -53,7 +53,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
### 2.2 Build Docker Image

```bash
-cd ../../
+cd ../../../../../
docker build -t opea/llm-docsum-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/summarization/tgi/langchain/Dockerfile .
```

8 changes: 4 additions & 4 deletions comps/llms/text-generation/README.md
@@ -139,7 +139,7 @@ export CHAT_PROCESSOR="ChatModelLlama"
#### 2.2.1 TGI

```bash
-cd ../../
+cd ../../../
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```

@@ -155,7 +155,7 @@ bash build_docker_vllm.sh
Build the microservice Docker image.

```bash
-cd ../../
+cd ../../../
docker build -t opea/llm-vllm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/langchain/Dockerfile .
```

@@ -171,8 +171,8 @@ bash build_docker_vllmray.sh
Build the microservice Docker image.

```bash
-cd ../../
-docker build -t opea/llm-ray:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/Dockerfile .
+cd ../../../
+docker build -t opea/llm-ray:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/ray/Dockerfile .
```

To start a docker container, you have two options:
4 changes: 2 additions & 2 deletions comps/llms/text-generation/tgi/README.md
@@ -32,7 +32,7 @@ curl http://${your_ip}:8008/generate \

```bash
export TGI_LLM_ENDPOINT="http://${your_ip}:8008"
-python text-generation/tgi/llm.py
+python llm.py
```

## 🚀2. Start Microservice with Docker (Option 2)
@@ -52,7 +52,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
### 2.2 Build Docker Image

```bash
-cd ../../
+cd ../../../../
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```

25 changes: 0 additions & 25 deletions comps/llms/text-generation/vllm/README.md

This file was deleted.

4 changes: 2 additions & 2 deletions comps/lvms/llava/README.md
@@ -63,14 +63,14 @@ docker build -t opea/llava:latest --build-arg https_proxy=$https_proxy --build-a
- Gaudi2 HPU

```bash
-cd ../..
+cd ../../../
docker build -t opea/llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/dependency/Dockerfile.intel_hpu .
```

#### 2.1.2 LVM Service Image

```bash
-cd ../..
+cd ../../../
docker build -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/llava/Dockerfile .
```

2 changes: 1 addition & 1 deletion comps/reranks/fastrag/README.md
@@ -39,7 +39,7 @@ export EMBED_MODEL="Intel/bge-small-en-v1.5-rag-int8-static"
### 2.2 Build Docker Image

```bash
-cd ../../
+cd ../../../
docker build -t opea/reranking-fastrag:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/fastrag/Dockerfile .
```

2 changes: 1 addition & 1 deletion comps/reranks/tei/README.md
@@ -51,7 +51,7 @@ export TEI_RERANKING_ENDPOINT="http://${your_ip}:8808"
### 2.2 Build Docker Image

```bash
-cd ../../
+cd ../../../
docker build -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .
```

18 changes: 13 additions & 5 deletions comps/retrievers/README.md
@@ -8,20 +8,28 @@ Overall, this microservice provides robust backend support for applications requ

## Retriever Microservice with Redis

-For details, please refer to this [readme](redis/README.md)
+For details, please refer to this [langchain readme](redis/langchain/README.md) or [llama_index readme](redis/llama_index/README.md)

## Retriever Microservice with Milvus

-For details, please refer to this [readme](milvus/README.md)
+For details, please refer to this [readme](milvus/langchain/README.md)

## Retriever Microservice with PGVector

-For details, please refer to this [readme](pgvector/README.md)
+For details, please refer to this [readme](pgvector/langchain/README.md)

## Retriever Microservice with Pathway

-For details, please refer to this [readme](pathway/README.md)
+For details, please refer to this [readme](pathway/langchain/README.md)

+## Retriever Microservice with QDrant
+
+For details, please refer to this [readme](qdrant/haystack/README.md)

## Retriever Microservice with VDMS

-For details, please refer to this [readme](vdms/README.md)
+For details, please refer to this [readme](vdms/langchain/README.md)

+## Retriever Microservice with Multimodal
+
+For details, please refer to this [readme](multimodal/redis/langchain/README.md)