diff --git a/ChatQnA/README.md b/ChatQnA/README.md
index bf78214da..08067c4d4 100644
--- a/ChatQnA/README.md
+++ b/ChatQnA/README.md
@@ -97,7 +97,8 @@ flowchart LR
 
 This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
 
-For a full list of framework, model, serving, and hardware choices available for each of the microservice components in the ChatQnA architecture, please refer to the below table:
+For a full list of framework, model, serving, and hardware choices available for each of the microservice components in the ChatQnA architecture, please refer to the below table:
+
@@ -213,7 +214,6 @@ For a full list of framework, model, serving, and hardware choices available for
-
 ## Deploy ChatQnA Service
 
 The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.