Vectorization failed 404 "/api/embed" Not Found #315
Comments
Same issue for me.
For reference, when I run inference against my local Ollama the endpoint is
Thanks a lot for the
@thomashacker Still facing the same issue with the latest Docker image. Is that bug fixed?
@rchouhan170590 Are you pulling the Docker image directly from Docker Hub?
Description
URL values supplied in the configuration cause issues if a trailing forward slash is present.
See forum discussion for more details.
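The failure mode can be sketched in a few lines of Python. The base URL and endpoint path mirror the configuration below; the concatenation logic is an illustration of the general problem, not Verba's actual code:

```python
from urllib.parse import urljoin

base = "http://ollama:11434/"  # OLLAMA_URL configured with a trailing slash

# Naive concatenation produces a double slash in the path, which many
# routers treat as a different (unregistered) route, hence a 404.
naive = base + "/api/embed"
print(naive)  # http://ollama:11434//api/embed

# Stripping the trailing slash before joining avoids the problem.
fixed = base.rstrip("/") + "/api/embed"
print(fixed)  # http://ollama:11434/api/embed

# urljoin also handles an absolute path correctly against either base form.
robust = urljoin(base, "/api/embed")
print(robust)  # http://ollama:11434/api/embed
```

Either normalization would make the configured `OLLAMA_URL` work with or without the trailing slash.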
Installation
Weaviate Deployment
Configuration
Reader: default
Chunker: default
Embedder: Ollama (llama 3.1)
Retriever: default
Generator: Ollama (llama 3.1)
Steps to Reproduce
OLLAMA_URL=http://ollama:11434/
OLLAMA_MODEL=llama3.1:latest
OLLAMA_EMBED_MODEL=llama3.1:latest
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3.1
docker compose --env-file <verba_env_path> up -d --build
docker network connect "verba_default" "ollama"
Open the UI at http://localhost:8000, choose the Docker deployment option, go to the "Import Data" tab, and attempt to import a file. Check the Verba container logs:
docker logs verba-verba-1
and expect to find the 404 "/api/embed" Not Found error.
Additional context
I run Ollama separately in its own container rather than as part of the Docker Compose stack; this requires connecting it to the Verba network once it's up and running. That is how I'm able to set OLLAMA_URL to the ollama domain rather than host.docker.internal as stated in the documentation. This just looked cleaner to me and lets me keep my Ollama instance running for other uses on my machine (e.g. IntelliSense, a personal AI assistant). However, the exact same problem occurs when using host.docker.internal as well: a trailing slash causes a failure when the frontend attempts to connect to an endpoint.
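Until this is fixed upstream, one defensive workaround is to normalize the base URL before building endpoint paths. `ollama_endpoint` below is a hypothetical helper for illustration, not part of Verba's API:

```python
import os

def ollama_endpoint(path: str) -> str:
    """Join OLLAMA_URL and an API path, tolerating stray slashes.

    Hypothetical helper sketching the workaround; Verba's internals differ.
    """
    base = os.environ.get("OLLAMA_URL", "http://localhost:11434")
    # Strip a trailing slash from the base and a leading one from the path,
    # then join with exactly one slash, so "http://ollama:11434/" plus
    # "/api/embed" cannot produce a double slash.
    return base.rstrip("/") + "/" + path.lstrip("/")
```

With `OLLAMA_URL=http://ollama:11434/` set, `ollama_endpoint("/api/embed")` yields `http://ollama:11434/api/embed`, avoiding the 404.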