Hello,

I have connection problems with my local LLMs on the GPU cluster. My LLM specification looks like this:

The base_url should be correct, because I had to choose a different port for Ollama. But I always get this error:

But when I look on the server, it finds the models:

Does anyone have an idea what the problem could be?
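As a quick sanity check (a sketch, not taken from the post: the port 11435 here only stands in for whatever custom port base_url actually uses), one can verify from the machine where kotaemon runs that Ollama is reachable and lists the models:

```python
import requests

# Assumed custom Ollama port; substitute the real one from base_url.
BASE_URL = "http://localhost:11435"

# /api/tags is Ollama's endpoint for listing locally available models.
resp = requests.get(f"{BASE_URL}/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])
```

If this call succeeds on the GPU node itself but fails from the client machine, the problem is network reachability rather than the model setup; Ollama binds to 127.0.0.1 by default, so setting OLLAMA_HOST=0.0.0.0 or tunnelling the port are common fixes.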
Replies: 2 comments

-
I am not clear on the ports: you are running the curl command against one port but connecting on a different one. Which app is running in the container, or are both apps running locally?
-
@Sennar4 if you're running kotaemon RAG locally using Docker, you probably need to replace http://localhost with http://host.docker.internal so the container can correctly communicate with a service running on the host machine.
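To make that concrete, here is a minimal sketch, assuming kotaemon talks to Ollama through Ollama's OpenAI-compatible endpoint on the default port 11434; the port and the model name are placeholders, not values from this thread:

```python
from openai import OpenAI

# Inside a Docker container, "localhost" refers to the container itself,
# so an Ollama server on the host is unreachable at http://localhost:11434.
# On Docker Desktop, "host.docker.internal" resolves to the host machine;
# on plain Linux, start the container with
#   --add-host=host.docker.internal:host-gateway
client = OpenAI(
    base_url="http://host.docker.internal:11434/v1",  # was http://localhost:11434/v1
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

reply = client.chat.completions.create(
    model="llama3",  # placeholder; use a model that `ollama list` shows
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```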