diff --git a/README.md b/README.md
index eeb6f2c8..ff4361b0 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@ The following steps are optional. They're only required if you want to run the w
 
 Follow [GMC README](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/README.md) to install GMC into your kubernetes cluster. [GenAIExamples](https://github.com/opea-project/GenAIExamples) contains several sample GenAI example use case pipelines such as ChatQnA, DocSum, etc.
 
-Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/README.md)).
+Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/intel/README_gmc.md)).
 
 ### Use helm charts to deploy
 
diff --git a/helm-charts/common/tei/README.md b/helm-charts/common/tei/README.md
index 9d9817ea..484b7cd8 100644
--- a/helm-charts/common/tei/README.md
+++ b/helm-charts/common/tei/README.md
@@ -41,4 +41,4 @@ curl http://localhost:2081/embed -X POST -d '{"inputs":"What is Deep Learning?"}
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tei will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
 | image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
diff --git a/helm-charts/common/teirerank/README.md b/helm-charts/common/teirerank/README.md
index d445364f..68f799c3 100644
--- a/helm-charts/common/teirerank/README.md
+++ b/helm-charts/common/teirerank/README.md
@@ -44,4 +44,4 @@ curl http://localhost:2082/rerank \
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, teirerank will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
 | image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
diff --git a/helm-charts/common/tgi/README.md b/helm-charts/common/tgi/README.md
index 0100378f..dd2507ea 100644
--- a/helm-charts/common/tgi/README.md
+++ b/helm-charts/common/tgi/README.md
@@ -48,4 +48,4 @@ curl http://localhost:2080/generate \
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-generation-inference"` | |
 | image.tag | string | `"1.4"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
diff --git a/helm-charts/common/vllm/README.md b/helm-charts/common/vllm/README.md
index 28bff970..0235a744 100644
--- a/helm-charts/common/vllm/README.md
+++ b/helm-charts/common/vllm/README.md
@@ -2,7 +2,7 @@
 
 Helm chart for deploying vLLM Inference service.
 
-Refer to [Deploy with Helm Charts](../README.md) for global guides.
+Refer to [Deploy with Helm Charts](../../README.md) for global guides.
 
 ## Installing the Chart
 
diff --git a/microservices-connector/config/samples/ChatQnA/use_cases.md b/microservices-connector/config/samples/ChatQnA/use_cases.md
index 7e88152b..504d2f3e 100644
--- a/microservices-connector/config/samples/ChatQnA/use_cases.md
+++ b/microservices-connector/config/samples/ChatQnA/use_cases.md
@@ -28,9 +28,6 @@ For Gaudi:
 - tei-embedding-service: opea/tei-gaudi:latest
 - tgi-service: ghcr.io/huggingface/tgi-gaudi:1.2.1
 
-> [NOTE]
-> Refer to [Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/xeon/README.md) or [Gaudi README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/gaudi/README.md) to build the OPEA images. These too will be available on Docker Hub soon to simplify use.
-
 ## Deploy ChatQnA pipeline
 
 There are 3 use cases for ChatQnA example:
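Note (not part of the patch above): several hunks retarget the pointer for the `horizontalPodAutoscaler.enabled` chart option, which the charts document as `false` by default. A minimal sketch of flipping it at install time for the tei chart, assuming a checkout of this repo as the working directory and a hypothetical release name `my-tei` (neither is taken from the patch); read `helm-charts/HPA.md` (the target of the fixed `../../HPA.md` links) before enabling it in a real cluster:

```console
# Hypothetical invocation: "my-tei" is an illustrative release name.
# horizontalPodAutoscaler.enabled is the chart option documented in the
# README tables touched by this patch; it defaults to false.
helm install my-tei ./helm-charts/common/tei \
  --set horizontalPodAutoscaler.enabled=true
```

The same `--set` toggle would apply to the teirerank and tgi charts, whose READMEs document the identical option.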