The vLLM LLM backend service is running, but the vLLM LLM microservice fails its sanity check with an internal server error; the docker logs show a connection timeout
Priority
Undecided
OS type
Ubuntu
Hardware type
Xeon-SPR
Installation method
Deploy method
Running nodes
Single Node
What's the version?
1.0
Description
The vLLM LLM backend service is running, but the vLLM LLM microservice fails its sanity check: the request returns an internal server error, and the docker logs show a connection timeout.
Reproduce steps
Followed the steps here: https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker_compose/intel/cpu/xeon#1-build-embedding-image
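Since the docker logs point to a connection timeout between the microservice and the backend, a minimal TCP probe can tell whether the vLLM service port is reachable at all before debugging further. This is a diagnostic sketch, not part of the linked steps; the host and port below are assumptions (adjust them to whatever host port your compose file maps for the vLLM service).

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

# Hypothetical host/port for the vLLM backend; replace with the port
# published in your docker compose file.
# print(probe("localhost", 9009))
```

If the probe fails from the host but the container is up, the problem is usually the published port mapping or a proxy variable (`http_proxy`/`no_proxy`) inside the microservice container rather than vLLM itself.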
Raw log
No response