diff --git a/ChatQnA/README.md b/ChatQnA/README.md
index 0b83dd770..43ef265a5 100644
--- a/ChatQnA/README.md
+++ b/ChatQnA/README.md
@@ -97,17 +97,18 @@ flowchart LR

 This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.

-In the below, we provide a table that describes for each microservice component in the ChatQnA architecture, the default configuration of the open source project, hardware, port, and endpoint.
+The table below describes, for each microservice component in the ChatQnA architecture, the default open source project, hardware, port, and endpoint.
+
+<details><summary>Gaudi default compose.yaml</summary>

-| MicroService | Open Source Project | HW | Port | Endpoint |
-|--------------|---------------------|------|------|---------------------|
-| Embedding | Langchain | Gaudi| 6000 | /v1/embaddings |
-| Retriever | Langchain | Xeon | 7000 | /v1/retrieval |
-| Reranking | Langchain | Xeon | 8000 | /v1/reranking |
-| LLM | Langchain | Gaudi| 9000 | /v1/chat/completions |
-| Dataprep | Redis | Xeon | 6007 | /v1/dataprep |
+| MicroService | Open Source Project | HW    | Port | Endpoint             |
+| ------------ | ------------------- | ----- | ---- | -------------------- |
+| Embedding    | Langchain           | Gaudi | 6000 | /v1/embeddings       |
+| Retriever    | Langchain           | Xeon  | 7000 | /v1/retrieval        |
+| Reranking    | Langchain           | Xeon  | 8000 | /v1/reranking        |
+| LLM          | Langchain           | Gaudi | 9000 | /v1/chat/completions |
+| Dataprep     | Redis               | Xeon  | 6007 | /v1/dataprep         |
+</details>
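For scripting against a deployment described by the table above, the port/endpoint defaults can be captured in a small helper. This is a sketch, not part of the PR: the `localhost` host and the `service_url` helper name are assumptions, and the embedding path uses the corrected `/v1/embeddings` spelling.

```python
# Default ChatQnA microservice endpoints, taken from the table above.
# Host is an assumption; adjust it for your deployment.
SERVICES = {
    "embedding": ("localhost", 6000, "/v1/embeddings"),
    "retriever": ("localhost", 7000, "/v1/retrieval"),
    "reranking": ("localhost", 8000, "/v1/reranking"),
    "llm":       ("localhost", 9000, "/v1/chat/completions"),
    "dataprep":  ("localhost", 6007, "/v1/dataprep"),
}

def service_url(name: str) -> str:
    """Build the base URL for one of the ChatQnA microservices."""
    host, port, path = SERVICES[name]
    return f"http://{host}:{port}{path}"
```

For example, `service_url("llm")` yields `http://localhost:9000/v1/chat/completions`, which is where the LLM microservice listens by default.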