Refine README of Examples (opea-project#420)
* update chatqna readme and set env script

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update for comments

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add consume

Signed-off-by: letonghan <[email protected]>

* modify details

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update codegen readme

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add patch modifications

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update codegen readme

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update ui options

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update codetrans readme

Signed-off-by: letonghan <[email protected]>

* update docsum & searchqna readme

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
letonghan and pre-commit-ci[bot] authored Jul 18, 2024
1 parent 9a453d5 commit 755bbf8
Showing 10 changed files with 539 additions and 12 deletions.
104 changes: 101 additions & 3 deletions ChatQnA/README.md
@@ -18,17 +18,81 @@ This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference

The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

Currently we support two ways of deploying ChatQnA services with Docker Compose:

1. Start services using the docker image on `docker hub`:

```bash
docker pull opea/chatqna:latest
```

Two types of UI are supported now. Choose the one you prefer and pull the corresponding docker image.

If you choose the conversational UI, follow the [instructions](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker/gaudi#-launch-the-conversational-ui-optional) and modify the [docker_compose.yaml](./docker/xeon/docker_compose.yaml).

```bash
docker pull opea/chatqna-ui:latest
# or
docker pull opea/chatqna-conversation-ui:latest
```

2. Start services using the docker images `built from source`: [Guide](./docker)

## Setup Environment Variables

To set up environment variables for deploying ChatQnA services, follow these steps:

1. Set the required environment variables:

```bash
export host_ip="External_Public_IP"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```

2. If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```

3. Set up the other environment variables. Source the script (running it with `bash` would export the variables only in a subshell); a complete worked session follows this list:

```bash
source ./docker/set_env.sh
```
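
Putting the three steps together, a complete session might look like the following sketch. All concrete values here are illustrative placeholders; substitute your own IP, proxy, and Hugging Face token:

```bash
# Hypothetical example values -- replace with your own.
export host_ip="192.168.1.2"   # or, on Linux: export host_ip=$(hostname -I | awk '{print $1}')
export no_proxy="localhost,127.0.0.1,${host_ip}"
export HUGGINGFACEHUB_API_TOKEN="hf_xxx"

# Only needed behind a proxy:
# export http_proxy="http://proxy.example.com:8080"
# export https_proxy="http://proxy.example.com:8080"

# Source (do not 'bash') so the exports persist in this shell.
source ./docker/set_env.sh

# Spot-check one derived variable.
echo "$BACKEND_SERVICE_ENDPOINT"   # expect http://192.168.1.2:8888/v1/chatqna
```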

## Deploy ChatQnA on Gaudi

Refer to the [Gaudi Guide](./docker/gaudi/README.md) for instructions on deploying ChatQnA on Gaudi.
If your version of `Habana Driver` < 1.16.0 (check with `hl-smi`; see the sketch at the end of this section), run the following commands directly to start the ChatQnA services. Please find the corresponding [docker_compose.yaml](./docker/gaudi/docker_compose.yaml).

```bash
cd GenAIExamples/ChatQnA/docker/gaudi/
docker compose -f docker_compose.yaml up -d
```

If your version of `Habana Driver` >= 1.16.0, refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.
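
To find your current driver version before choosing a path, `hl-smi` prints it in its banner; a quick filter (the exact field label may vary across `hl-smi` releases) is:

```bash
# Print the driver version line from the hl-smi banner.
hl-smi | grep -i "driver version"
```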

## Deploy ChatQnA on Xeon

Refer to the [Xeon Guide](./docker/xeon/README.md) for instructions on deploying ChatQnA on Xeon.
Please find the corresponding [docker_compose.yaml](./docker/xeon/docker_compose.yaml).

```bash
cd GenAIExamples/ChatQnA/docker/xeon/
docker compose -f docker_compose.yaml up -d
```

Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.
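
After either of the deployments above, the first start downloads models and can take several minutes before the stack answers requests. A sketch for verifying readiness, assuming the default ports from `set_env.sh` and TGI's `/health` route:

```bash
# Show container states for this compose file.
docker compose -f docker_compose.yaml ps

# Poll TGI (port 8008 in set_env.sh) until the model is loaded.
until curl -sf "http://${host_ip}:8008/health" > /dev/null; do
  echo "waiting for TGI..."
  sleep 10
done
echo "TGI is ready; the backend should answer on port 8888."
```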

## Deploy ChatQnA on NVIDIA GPU

Refer to the [NVIDIA GPU Guide](./docker/gpu/README.md) for instructions on deploying ChatQnA on NVIDIA GPU.

```bash
cd GenAIExamples/ChatQnA/docker/gpu/
docker compose -f docker_compose.yaml up -d
```

Refer to the [NVIDIA GPU Guide](./docker/gpu/README.md) for more instructions on building docker images from source.

## Deploy ChatQnA into Kubernetes on Xeon & Gaudi

@@ -37,3 +101,37 @@ Refer to the [Kubernetes Guide](./kubernetes/manifests/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
## Deploy ChatQnA on AI PC

Refer to the [AI PC Guide](./docker/aipc/README.md) for instructions on deploying ChatQnA on AI PC.

# Consume ChatQnA Service

There are two ways to consume the ChatQnA service:

1. Use a cURL command in the terminal

```bash
curl http://${host_ip}:8888/v1/chatqna \
-H "Content-Type: application/json" \
-d '{
"messages": "What is the revenue of Nike in 2023?"
}'
```

2. Access via frontend

To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`

By default, the UI runs on port 5173 internally.

If you choose the conversational UI, use this URL instead: `http://{host_ip}:5174`
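
To ask questions about your own documents, ingest them first through the dataprep endpoints defined in `set_env.sh`. The multipart `files` field and the POST method below follow the pattern used in the microservice validation guides, but treat both calls as assumptions and consult the [Xeon Guide](./docker/xeon/README.md) if they fail:

```bash
# Upload a local document for indexing (hypothetical file name).
curl -X POST "${DATAPREP_SERVICE_ENDPOINT}" \
  -H "Content-Type: multipart/form-data" \
  -F "files=@./nike-2023-annual-report.pdf"

# List the files that have been ingested.
curl -X POST "${DATAPREP_GET_FILE_ENDPOINT}"
```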

# Troubleshooting

1. If you get errors like "Access Denied", please [validate the microservices](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker/xeon#validate-microservices) first. A simple example:

```bash
http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'
```

2. (Docker only) If all microservices work well, check whether port 8888 on `${host_ip}` is reachable. The port may already be taken by another user; if so, modify the host port mapping in `docker_compose.yaml` (see the port-conflict sketch below).

3. (Docker only) If you get errors like "The container name is in use", change the container names in `docker_compose.yaml`.
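
To diagnose the port and name conflicts above, standard tooling is sufficient; for example:

```bash
# Check whether something is already listening on the backend port (8888).
ss -tlnp | grep ":8888" || echo "port 8888 is free"

# Find any container already publishing that port.
docker ps --filter "publish=8888"
```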
23 changes: 23 additions & 0 deletions ChatQnA/docker/set_env.sh
@@ -0,0 +1,23 @@
#!/usr/bin/env bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:8090"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6008/v1/dataprep/get_file"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6009/v1/dataprep/delete_file"
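
Every endpoint in this script interpolates `${host_ip}`, so running it without the earlier `export host_ip=...` silently yields URLs like `http://:8888/v1/chatqna`. A defensive guard at the top of the script would catch this early; a sketch, not part of the committed file:

```bash
# Hypothetical guard: abort early if host_ip was not exported beforehand.
# 'return' covers the sourced case; 'exit' covers direct execution.
if [ -z "${host_ip:-}" ]; then
  echo "Error: host_ip is not set. Run 'export host_ip=<your-ip>' first." >&2
  return 1 2>/dev/null || exit 1
fi
```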
91 changes: 89 additions & 2 deletions CodeGen/README.md
@@ -22,12 +22,99 @@ The workflow falls into the following architecture:

The CodeGen service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.

Currently we support two ways of deploying CodeGen services with Docker Compose:

1. Start services using the docker image on `docker hub`:

```bash
docker pull opea/codegen:latest
```

2. Start services using the docker images `built from source`: [Guide](./docker)

## Setup Environment Variables

To set up environment variables for deploying CodeGen services, follow these steps:

1. Set the required environment variables:

```bash
export host_ip="External_Public_IP"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```

2. If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```

3. Set up the other environment variables. Source the script so that the exports persist in your current shell:

```bash
source ./docker/set_env.sh
```

## Deploy CodeGen using Docker

### Deploy CodeGen on Gaudi

- If your version of `Habana Driver` < 1.16.0 (check with `hl-smi`), run the following commands directly to start the CodeGen services. Please find the corresponding [docker_compose.yaml](./docker/gaudi/docker_compose.yaml).

```bash
cd GenAIExamples/CodeGen/docker/gaudi
docker compose -f docker_compose.yaml up -d
```

- If your version of `Habana Driver` >= 1.16.0, refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.

### Deploy CodeGen on Xeon

Please find the corresponding [docker_compose.yaml](./docker/xeon/docker_compose.yaml).

```bash
cd GenAIExamples/CodeGen/docker/xeon
docker compose -f docker_compose.yaml up -d
```

Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.

## Deploy CodeGen using Kubernetes

Refer to the [Kubernetes Guide](./kubernetes/manifests/README.md) for instructions on deploying CodeGen into Kubernetes on Xeon & Gaudi.

# Consume CodeGen Service

There are two ways to consume the CodeGen service:

1. Use a cURL command in the terminal

```bash
curl http://${host_ip}:7778/v1/codegen \
-H "Content-Type: application/json" \
-d '{"messages": "Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception."}'
```

2. Access via frontend

To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`.

By default, the UI runs on port 5173 internally.
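
Building on the cURL call above, a small shell helper makes repeated prompts less unwieldy. This is a sketch: the `codegen` function name is ours, not part of the project, and it assumes `jq` is installed for safe JSON escaping:

```bash
# Hypothetical convenience wrapper around the CodeGen endpoint.
codegen() {
  curl -s "http://${host_ip}:7778/v1/codegen" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg m "$1" '{messages: $m}')"
}

codegen "Write a Python function that checks whether a string is a palindrome."
```

Using `jq -n --arg` to build the payload avoids broken JSON when the prompt contains quotes or newlines.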

# Troubleshooting

1. If you get errors like "Access Denied", please [validate the microservices](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker/xeon#validate-microservices) first. A simple example:

```bash
http_proxy=""
curl http://${host_ip}:8028/generate \
-X POST \
-d '{"inputs":"Implement a high-level API for a TODO list application. The API takes as input an operation request and updates the TODO list in place. If the request is invalid, raise an exception.","parameters":{"max_new_tokens":256, "do_sample": true}}' \
-H 'Content-Type: application/json'
```

2. (Docker only) If all microservices work well, check whether port 7778 on `${host_ip}` is reachable. The port may already be taken by another user; if so, modify the host port mapping in `docker_compose.yaml`.

3. (Docker only) If you get errors like "The container name is in use", change the container names in `docker_compose.yaml`.
11 changes: 11 additions & 0 deletions CodeGen/docker/set_env.sh
@@ -0,0 +1,11 @@
#!/usr/bin/env bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


export LLM_MODEL_ID="meta-llama/CodeLlama-7b-hf"
export TGI_LLM_ENDPOINT="http://${host_ip}:8028"
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:7778/v1/codegen"
91 changes: 89 additions & 2 deletions CodeTrans/README.md
@@ -12,12 +12,99 @@ This Code Translation use case uses Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors.

The Code Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.

Currently we support two ways of deploying Code Translation services with Docker Compose:

1. Start services using the docker image on `docker hub`:

```bash
docker pull opea/codetrans:latest
```

2. Start services using the docker images `built from source`: [Guide](./docker)

## Setup Environment Variables

To set up environment variables for deploying Code Translation services, follow these steps:

1. Set the required environment variables:

```bash
export host_ip="External_Public_IP"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```

2. If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```

3. Set up the other environment variables. Source the script so that the exports persist in your current shell:

```bash
source ./docker/set_env.sh
```

## Deploy with Docker

### Deploy Code Translation on Gaudi

- If your version of `Habana Driver` < 1.16.0 (check with `hl-smi`), run the following commands directly to start the Code Translation services. Please find the corresponding [docker_compose.yaml](./docker/gaudi/docker_compose.yaml).

```bash
cd GenAIExamples/CodeTrans/docker/gaudi
docker compose -f docker_compose.yaml up -d
```

- If your version of `Habana Driver` >= 1.16.0, refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.

### Deploy Code Translation on Xeon

Please find the corresponding [docker_compose.yaml](./docker/xeon/docker_compose.yaml).

```bash
cd GenAIExamples/CodeTrans/docker/xeon
docker compose -f docker_compose.yaml up -d
```

Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.

## Deploy with Kubernetes

Please refer to the [Code Translation Kubernetes Guide](./kubernetes/README.md).

# Consume Code Translation Service

There are two ways to consume the Code Translation service:

1. Use a cURL command in the terminal

```bash
curl http://${host_ip}:7777/v1/codetrans \
-H "Content-Type: application/json" \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```

2. Access via frontend

To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`.

By default, the UI runs on port 5173 internally.
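
Escaping source code by hand inside a JSON string literal, as in the cURL call above, is error-prone. A sketch that builds the payload from a local file instead, assuming `jq` is installed (`jq -Rs` slurps the raw file into a single JSON string); the file name is hypothetical:

```bash
# Translate a local Go file to Python via the codetrans endpoint.
SRC_FILE="main.go"
payload=$(jq -Rs --arg from "Golang" --arg to "Python" \
  '{language_from: $from, language_to: $to, source_code: .}' "$SRC_FILE")

curl "http://${host_ip}:7777/v1/codetrans" \
  -H "Content-Type: application/json" \
  -d "$payload"
```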

# Troubleshooting

1. If you get errors like "Access Denied", please [validate the microservices](https://github.com/opea-project/GenAIExamples/tree/main/CodeTrans/docker/xeon#validate-microservices) first. A simple example:

```bash
http_proxy=""
curl http://${host_ip}:8008/generate \
-X POST \
-d '{"inputs":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:","parameters":{"max_new_tokens":17, "do_sample": true}}' \
-H 'Content-Type: application/json'
```

2. (Docker only) If all microservices work well, check whether port 7777 on `${host_ip}` is reachable. The port may already be taken by another user; if so, modify the host port mapping in `docker_compose.yaml`.

3. (Docker only) If you get errors like "The container name is in use", change the container names in `docker_compose.yaml`.
11 changes: 11 additions & 0 deletions CodeTrans/docker/set_env.sh
@@ -0,0 +1,11 @@
#!/usr/bin/env bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


export LLM_MODEL_ID="HuggingFaceH4/mistral-7b-grok"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:7777/v1/codetrans"