doc: fix multiple H1 headings #481

Merged 2 commits on Aug 17, 2024
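The PR's pattern throughout is the single-H1 rule: each README keeps one `#` title and demotes every other top-level heading by one level (the rule markdownlint enforces as MD025). As a rough illustration only — this helper is hypothetical and not part of the PR — a script to flag files that would need this fix might look like:

```python
import re

def count_h1(markdown: str) -> int:
    """Count top-level ATX headings, skipping fenced code blocks
    so that `# comment` lines inside ```bash fences are ignored."""
    in_fence = False
    count = 0
    for line in markdown.splitlines():
        if line.lstrip().startswith("```"):
            in_fence = not in_fence  # toggle on both opening and closing fence
            continue
        # a top-level heading is '#' followed by whitespace ('##' does not match)
        if not in_fence and re.match(r"#\s", line):
            count += 1
    return count

doc = (
    "# Title\n\n"
    "## Section\n\n"
    "```bash\n# a comment, not a heading\n```\n\n"
    "# Another H1\n"
)
print(count_h1(doc))  # 2 -> this document would fail the single-H1 check
```

In practice markdownlint's MD025 rule is the standard way to enforce this in CI; the sketch above only shows the idea behind the check.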
16 changes: 8 additions & 8 deletions comps/agent/langchain/README.md
@@ -4,32 +4,32 @@ The langchain agent model refers to a framework that integrates the reasoning ca

![Architecture Overview](agent_arch.jpg)

# 🚀1. Start Microservice with Python(Option 1)
## 🚀1. Start Microservice with Python(Option 1)

## 1.1 Install Requirements
### 1.1 Install Requirements

```bash
cd comps/agent/langchain/
pip install -r requirements.txt
```

## 1.2 Start Microservice with Python Script
### 1.2 Start Microservice with Python Script

```bash
cd comps/agent/langchain/
python agent.py
```

# 🚀2. Start Microservice with Docker (Option 2)
## 🚀2. Start Microservice with Docker (Option 2)

## Build Microservices
### Build Microservices

```bash
cd GenAIComps/ # back to GenAIComps/ folder
docker build -t opea/comps-agent-langchain:latest -f comps/agent/langchain/docker/Dockerfile .
```

## start microservices
### start microservices

```bash
export ip_address=$(hostname -I | awk '{print $1}')
@@ -56,7 +56,7 @@ docker logs comps-langchain-agent-endpoint
> docker run --rm --runtime=runc --name="comps-langchain-agent-endpoint" -v ./comps/agent/langchain/:/home/user/comps/agent/langchain/ -p 9090:9090 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} --env-file ${agent_env} opea/comps-agent-langchain:latest
> ```

# 🚀3. Validate Microservice
## 🚀3. Validate Microservice

Once microservice starts, user can use below script to invoke.

@@ -73,7 +73,7 @@ data: [DONE]

```

# 🚀4. Provide your own tools
## 🚀4. Provide your own tools

- Define tools

24 changes: 12 additions & 12 deletions comps/asr/README.md
@@ -2,17 +2,17 @@

ASR (Audio-Speech-Recognition) microservice helps users convert speech to text. When building a talking bot with LLM, users will need to convert their audio inputs (What they talk, or Input audio from other sources) to text, so the LLM is able to tokenize the text and generate an answer. This microservice is built for that conversion stage.

# 🚀1. Start Microservice with Python (Option 1)
## 🚀1. Start Microservice with Python (Option 1)

To start the ASR microservice with Python, you need to first install python packages.

## 1.1 Install Requirements
### 1.1 Install Requirements

```bash
pip install -r requirements.txt
```

## 1.2 Start Whisper Service/Test
### 1.2 Start Whisper Service/Test

- Xeon CPU

@@ -40,7 +40,7 @@ nohup python whisper_server.py --device=hpu &
python check_whisper_server.py
```

## 1.3 Start ASR Service/Test
### 1.3 Start ASR Service/Test

```bash
cd ../
@@ -54,13 +54,13 @@ While the Whisper service is running, you can start the ASR service. If the ASR
{'id': '0e686efd33175ce0ebcf7e0ed7431673', 'text': 'who is pat gelsinger'}
```

# 🚀2. Start Microservice with Docker (Option 2)
## 🚀2. Start Microservice with Docker (Option 2)

Alternatively, you can also start the ASR microservice with Docker.

## 2.1 Build Images
### 2.1 Build Images

### 2.1.1 Whisper Server Image
#### 2.1.1 Whisper Server Image

- Xeon CPU

@@ -76,15 +76,15 @@ cd ../..
docker build -t opea/whisper-gaudi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/whisper/Dockerfile_hpu .
```

### 2.1.2 ASR Service Image
#### 2.1.2 ASR Service Image

```bash
docker build -t opea/asr:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/asr/Dockerfile .
```

## 2.2 Start Whisper and ASR Service
### 2.2 Start Whisper and ASR Service

### 2.2.1 Start Whisper Server
#### 2.2.1 Start Whisper Server

- Xeon

@@ -98,15 +98,15 @@ docker run -p 7066:7066 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$htt
docker run -p 7066:7066 --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy opea/whisper-gaudi:latest
```

### 2.2.2 Start ASR service
#### 2.2.2 Start ASR service

```bash
ip_address=$(hostname -I | awk '{print $1}')

docker run -d -p 9099:9099 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e ASR_ENDPOINT=http://$ip_address:7066 opea/asr:latest
```

### 2.2.3 Test
#### 2.2.3 Test

```bash
# Use curl or python
8 changes: 4 additions & 4 deletions comps/chathistory/mongo/README.md
@@ -17,16 +17,16 @@ export DB_NAME=${DB_NAME}
export COLLECTION_NAME=${COLLECTION_NAME}
```

# 🚀Start Microservice with Docker
## 🚀Start Microservice with Docker

## Build Docker Image
### Build Docker Image

```bash
cd ../../../../
docker build -t opea/chathistory-mongo-server:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/chathistory/mongo/docker/Dockerfile .
```

## Run Docker with CLI
### Run Docker with CLI

- Run mongoDB image

@@ -40,7 +40,7 @@ docker run -d -p 27017:27017 --name=mongo mongo:latest
docker run -d --name="chathistory-mongo-server" -p 6013:6013 -p 6012:6012 -p 6014:6014 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MONGO_HOST=${MONGO_HOST} -e MONGO_PORT=${MONGO_PORT} -e DB_NAME=${DB_NAME} -e COLLECTION_NAME=${COLLECTION_NAME} opea/chathistory-mongo-server:latest
```

# Invoke Microservice
## Invoke Microservice

Once chathistory service is up and running, users can update the database by using the below API endpoint. The API returns a unique UUID for the saved conversation.

10 changes: 5 additions & 5 deletions comps/dataprep/README.md
@@ -17,22 +17,22 @@ Occasionally unstructured data will contain image data, to convert the image dat
export SUMMARIZE_IMAGE_VIA_LVM=1
```

# Dataprep Microservice with Redis
## Dataprep Microservice with Redis

For details, please refer to this [readme](redis/README.md)

# Dataprep Microservice with Milvus
## Dataprep Microservice with Milvus

For details, please refer to this [readme](milvus/README.md)

# Dataprep Microservice with Qdrant
## Dataprep Microservice with Qdrant

For details, please refer to this [readme](qdrant/README.md)

# Dataprep Microservice with Pinecone
## Dataprep Microservice with Pinecone

For details, please refer to this [readme](pinecone/README.md)

# Dataprep Microservice with PGVector
## Dataprep Microservice with PGVector

For details, please refer to this [readme](pgvector/README.md)
18 changes: 9 additions & 9 deletions comps/dataprep/milvus/README.md
@@ -1,8 +1,8 @@
# Dataprep Microservice with Milvus

# 🚀Start Microservice with Python
## 🚀Start Microservice with Python

## Install Requirements
### Install Requirements

```bash
pip install -r requirements.txt
@@ -11,11 +11,11 @@ apt-get install libtesseract-dev -y
apt-get install poppler-utils -y
```

## Start Milvus Server
### Start Milvus Server

Please refer to this [readme](../../../vectorstores/langchain/milvus/README.md).

## Setup Environment Variables
### Setup Environment Variables

```bash
export no_proxy=${your_no_proxy}
@@ -27,30 +27,30 @@ export COLLECTION_NAME=${your_collection_name}
export MOSEC_EMBEDDING_ENDPOINT=${your_embedding_endpoint}
```

## Start Document Preparation Microservice for Milvus with Python Script
### Start Document Preparation Microservice for Milvus with Python Script

Start document preparation microservice for Milvus with below command.

```bash
python prepare_doc_milvus.py
```

# 🚀Start Microservice with Docker
## 🚀Start Microservice with Docker

## Build Docker Image
### Build Docker Image

```bash
cd ../../../../
docker build -t opea/dataprep-milvus:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy --build-arg no_proxy=$no_proxy -f comps/dataprep/milvus/docker/Dockerfile .
```

## Run Docker with CLI
### Run Docker with CLI

```bash
docker run -d --name="dataprep-milvus-server" -p 6010:6010 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MOSEC_EMBEDDING_ENDPOINT=${your_embedding_endpoint} -e MILVUS=${your_milvus_host_ip} opea/dataprep-milvus:latest
```

# Invoke Microservice
## Invoke Microservice

Once document preparation microservice for Milvus is started, user can use below command to invoke the microservice to convert the document to embedding and save to the database.

30 changes: 15 additions & 15 deletions comps/dataprep/pgvector/README.md
@@ -1,14 +1,14 @@
# Dataprep Microservice with PGVector

# 🚀1. Start Microservice with Python(Option 1)
## 🚀1. Start Microservice with Python(Option 1)

## 1.1 Install Requirements
### 1.1 Install Requirements

```bash
pip install -r requirements.txt
```

## 1.2 Setup Environment Variables
### 1.2 Setup Environment Variables

```bash
export PG_CONNECTION_STRING=postgresql+psycopg2://testuser:testpwd@${your_ip}:5432/vectordb
@@ -18,25 +18,25 @@ export LANGCHAIN_API_KEY=${your_langchain_api_key}
export LANGCHAIN_PROJECT="opea/gen-ai-comps:dataprep"
```

## 1.3 Start PGVector
### 1.3 Start PGVector

Please refer to this [readme](../../vectorstores/langchain/pgvector/README.md).

## 1.4 Start Document Preparation Microservice for PGVector with Python Script
### 1.4 Start Document Preparation Microservice for PGVector with Python Script

Start document preparation microservice for PGVector with below command.

```bash
python prepare_doc_pgvector.py
```

# 🚀2. Start Microservice with Docker (Option 2)
## 🚀2. Start Microservice with Docker (Option 2)

## 2.1 Start PGVector
### 2.1 Start PGVector

Please refer to this [readme](../../vectorstores/langchain/pgvector/README.md).

## 2.2 Setup Environment Variables
### 2.2 Setup Environment Variables

```bash
export PG_CONNECTION_STRING=postgresql+psycopg2://testuser:testpwd@${your_ip}:5432/vectordb
@@ -46,29 +46,29 @@ export LANGCHAIN_API_KEY=${your_langchain_api_key}
export LANGCHAIN_PROJECT="opea/dataprep"
```

## 2.3 Build Docker Image
### 2.3 Build Docker Image

```bash
cd GenAIComps
docker build -t opea/dataprep-pgvector:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/pgvector/langchain/docker/Dockerfile .
```

## 2.4 Run Docker with CLI (Option A)
### 2.4 Run Docker with CLI (Option A)

```bash
docker run --name="dataprep-pgvector" -p 6007:6007 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e PG_CONNECTION_STRING=$PG_CONNECTION_STRING -e INDEX_NAME=$INDEX_NAME -e TEI_ENDPOINT=$TEI_ENDPOINT opea/dataprep-pgvector:latest
```

## 2.5 Run with Docker Compose (Option B)
### 2.5 Run with Docker Compose (Option B)

```bash
cd comps/dataprep/langchain/pgvector/docker
docker compose -f docker-compose-dataprep-pgvector.yaml up -d
```

# 🚀3. Consume Microservice
## 🚀3. Consume Microservice

## 3.1 Consume Upload API
### 3.1 Consume Upload API

Once document preparation microservice for PGVector is started, user can use below command to invoke the microservice to convert the document to embedding and save to the database.

@@ -79,7 +79,7 @@ curl -X POST \
http://localhost:6007/v1/dataprep
```

## 3.2 Consume get_file API
### 3.2 Consume get_file API

To get uploaded file structures, use the following command:

@@ -108,7 +108,7 @@ Then you will get the response JSON like this:
]
```

## 4.3 Consume delete_file API
### 4.3 Consume delete_file API

To delete uploaded file/link, use the following command.
