From 83340e09d5698939947eaf00866fbb8beac12e5a Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Thu, 29 Aug 2024 20:42:01 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: Chun Tao
---
 ChatQnA/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ChatQnA/README.md b/ChatQnA/README.md
index bf78214da..08067c4d4 100644
--- a/ChatQnA/README.md
+++ b/ChatQnA/README.md
@@ -97,7 +97,8 @@ flowchart LR
 This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.

-For a full list of framework, model, serving, and hardware choices available for each of the microservice components in the ChatQnA architecture, please refer to the below table:
+For a full list of framework, model, serving, and hardware choices available for each of the microservice components in the ChatQnA architecture, please refer to the below table:
+
@@ -213,7 +214,6 @@ For a full list of framework, model, serving, and hardware choices available for
-
 ## Deploy ChatQnA Service

 The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.