[Doc] Update CodeGen and Translation READMEs (#847)
Signed-off-by: letonghan <[email protected]>
letonghan authored Sep 19, 2024
1 parent f04f061 commit a09395e
Showing 6 changed files with 66 additions and 13 deletions.
9 changes: 7 additions & 2 deletions CodeGen/README.md
@@ -43,6 +43,8 @@ By default, the LLM model is set to a default value as listed below:
[meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) is a gated model that requires submitting an access request through Hugging Face. You can replace it with another model.
Change the `LLM_MODEL_ID` below to suit your needs, for example [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) or [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct).

If you choose to use `meta-llama/CodeLlama-7b-hf` as the LLM model, visit [its model page](https://huggingface.co/meta-llama/CodeLlama-7b-hf) and click the `Expand to review and access` button to request access.

### Setup Environment Variable

To set up environment variables for deploying CodeGen services, follow these steps:
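
The concrete steps follow in the full README; as a minimal illustrative sketch (the variable names `host_ip` and `HUGGINGFACEHUB_API_TOKEN` are assumptions based on common OPEA conventions, and the values are placeholders):

```bash
# Replace the placeholder values with your own.
export host_ip="192.168.0.1"                    # IP address of the host running the services
export HUGGINGFACEHUB_API_TOKEN="hf_xxx"        # needed for gated models such as CodeLlama (assumed variable name)
export LLM_MODEL_ID="Qwen/CodeQwen1.5-7B-Chat"  # see Required Models above
```
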
@@ -136,6 +138,9 @@ Two ways of consuming CodeGen Service:
-H 'Content-Type: application/json'
```

2. If you get errors like "aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host xx.xx.xx.xx:8028", check the `tgi service` first. If the `tgi service` reports "Cannot access gated repo for url https://huggingface.co/meta-llama/CodeLlama-7b-hf/resolve/main/config.json.", you need to request model access first; follow the instructions in the [Required Models](#required-models) section. A debugging sketch follows this list.

3. (Docker only) If all microservices work well, check port ${host_ip}:7778; the port may already be allocated by another user, and you can modify it in `compose.yaml`.

4. (Docker only) If you get errors like "The container name is in use", change the container name in `compose.yaml`.
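
A minimal debugging sketch for the connection errors above (the container name `tgi-service` and the `/health` route are assumptions based on typical TGI deployments; confirm the real name with `docker ps`):

```bash
# Look for the gated-repo error in the TGI container logs.
# The container name is an assumption -- check `docker ps` for yours.
docker logs tgi-service 2>&1 | grep -i "gated repo"

# Confirm the TGI port from the error message is actually reachable.
curl -sf http://${host_ip}:8028/health && echo "tgi reachable"
```
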
32 changes: 28 additions & 4 deletions Translation/docker_compose/intel/cpu/xeon/README.md
Original file line number Diff line number Diff line change
@@ -10,9 +10,24 @@ For detailed information about these instance types, you can refer to this [link]

After launching your instance, you can connect to it using SSH (for Linux instances) or Remote Desktop Protocol (RDP) (for Windows instances). From there, you'll have full access to your Xeon server, allowing you to install, configure, and manage your applications as needed.
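
For example, a typical SSH connection to a Linux instance looks like the following (the key path, user name, and address are placeholders):

```bash
ssh -i ~/.ssh/my-key.pem ubuntu@<public-ip-of-your-instance>
```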

## 🚀 Prepare Docker Images

There are two options for preparing the Docker images:

1. Pull the Docker images from Docker Hub.

   - More stable to use.
   - Downloaded automatically when you run the `docker compose` command.

2. Build the Docker images from source.

   - Contains the latest features.
   - Needs to be built manually.

If you choose to pull the Docker images from Docker Hub, skip this section and go directly to [Start Microservices](#start-microservices).

Otherwise, follow the instructions below to build the Docker images from source.
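
The build commands below assume the GenAIComps repository is already cloned locally, for example:

```bash
git clone https://github.com/opea-project/GenAIComps.git
```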

### 1. Build LLM Image

@@ -45,15 +60,15 @@ docker build -t opea/translation-ui:latest --build-arg https_proxy=$https_proxy

```bash
cd GenAIComps
docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
```

Then run the command `docker images`; you should see the following Docker images:

1. `opea/llm-tgi:latest`
2. `opea/translation:latest`
3. `opea/translation-ui:latest`
4. `opea/nginx:latest`
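
To quickly confirm that all four images are present, one possible check (assuming the default `latest` tags):

```bash
docker images | grep -E "opea/(llm-tgi|translation|translation-ui|nginx)"
```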

## 🚀 Start Microservices

@@ -101,6 +116,15 @@ Change the `LLM_MODEL_ID` below for your needs.
docker compose up -d
```

> Note: If not already present locally, the Docker images will be pulled automatically from Docker Hub:
```bash
docker pull opea/llm-tgi:latest
docker pull opea/translation:latest
docker pull opea/translation-ui:latest
docker pull opea/nginx:latest
```
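
After `docker compose up -d` returns, a quick sanity check could look like this (the container name comes from `compose.yaml`; adjust it if you changed yours):

```bash
docker compose ps                            # all services should be "Up"
docker logs translation-xeon-backend-server  # inspect the backend if anything fails
```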

### Validate Microservices

1. TGI Service
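
For reference, a typical TGI validation request looks like the following (the port is an assumption; use the one mapped for the TGI service in your `compose.yaml`):

```bash
curl http://${host_ip}:8008/generate \
  -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17}}' \
  -H 'Content-Type: application/json'
```
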
2 changes: 1 addition & 1 deletion Translation/docker_compose/intel/cpu/xeon/compose.yaml
@@ -66,7 +66,7 @@ services:
ipc: host
restart: always
translation-xeon-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: translation-xeon-nginx-server
depends_on:
- translation-xeon-backend-server
32 changes: 28 additions & 4 deletions Translation/docker_compose/intel/hpu/gaudi/README.md
@@ -2,9 +2,24 @@

This document outlines the deployment process for a Translation application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate the microservices. We will publish the Docker images to Docker Hub; this will simplify the deployment process for this service.
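
Before preparing the images, it can help to verify that the Gaudi accelerators are visible on the host. A quick check, assuming the Habana driver and system tools are installed:

```bash
hl-smi   # lists the available Gaudi devices, similar to nvidia-smi
```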

## 🚀 Prepare Docker Images

There are two options for preparing the Docker images:

1. Pull the Docker images from Docker Hub.

   - More stable to use.
   - Downloaded automatically when you run the `docker compose` command.

2. Build the Docker images from source.

   - Contains the latest features.
   - Needs to be built manually.

If you choose to pull the Docker images from Docker Hub, skip directly to [Start Microservices](#start-microservices).

Otherwise, follow the instructions below to build the Docker images from source.

### 1. Build LLM Image

@@ -37,15 +52,15 @@ docker build -t opea/translation-ui:latest --build-arg https_proxy=$https_proxy

```bash
cd GenAIComps
docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
```

Then run the command `docker images`; you should see the following four Docker images:

1. `opea/llm-tgi:latest`
2. `opea/translation:latest`
3. `opea/translation-ui:latest`
4. `opea/nginx:latest`

## 🚀 Start Microservices

@@ -93,6 +108,15 @@ Change the `LLM_MODEL_ID` below for your needs.
docker compose up -d
```

> Note: If not already present locally, the Docker images will be pulled automatically from Docker Hub:
```bash
docker pull opea/llm-tgi:latest
docker pull opea/translation:latest
docker pull opea/translation-ui:latest
docker pull opea/nginx:latest
```

### Validate Microservices

1. TGI Service
2 changes: 1 addition & 1 deletion Translation/docker_compose/intel/hpu/gaudi/compose.yaml
@@ -67,7 +67,7 @@ services:
ipc: host
restart: always
translation-gaudi-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: translation-gaudi-nginx-server
depends_on:
- translation-gaudi-backend-server
2 changes: 1 addition & 1 deletion Translation/docker_image_build/build.yaml
@@ -28,4 +28,4 @@ services:
context: GenAIComps
dockerfile: comps/nginx/Dockerfile
extends: translation
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
