
[BUG] Quickstart Ollama External API - GET http://ollama:11434/api/tags "HTTP/1.1 403 Forbidden" #2066

Open
reysic opened this issue Aug 22, 2024 · 9 comments
Labels: bug Something isn't working

reysic commented Aug 22, 2024

Pre-check

  • I have searched the existing issues and none cover this bug.

Description

Following the Quickstart documentation provided here for Ollama External API on macOS results in a 403 error in the PrivateGPT container when attempting to communicate with Ollama.

I've verified that Ollama is running locally by visiting http://localhost:11434/ and receiving the customary "Ollama is running".

private-gpt git:(main) docker compose --profile ollama-api up

WARN[0000] The "HF_TOKEN" variable is not set. Defaulting to a blank string. 
[+] Running 3/0
 ✔ Network private-gpt_default                 Created                                                                                                                                      0.0s 
 ✔ Container private-gpt-ollama-1              Created                                                                                                                                      0.0s 
 ✔ Container private-gpt-private-gpt-ollama-1  Created                                                                                                                                      0.0s 
Attaching to ollama-1, private-gpt-ollama-1
ollama-1              | time="2024-08-22T16:42:04Z" level=info msg="Configuration loaded from flags."
private-gpt-ollama-1  | 16:42:04.266 [INFO    ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
private-gpt-ollama-1  | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-ollama-1  | 16:42:07.647 [INFO    ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-ollama-1  | 16:42:07.686 [INFO    ]                     httpx - HTTP Request: GET http://ollama:11434/api/tags "HTTP/1.1 403 Forbidden"
private-gpt-ollama-1  | 16:42:07.686 [ERROR   ]  private_gpt.utils.ollama - Failed to connect to Ollama: 
private-gpt-ollama-1  | Traceback (most recent call last):
private-gpt-ollama-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-ollama-1  |     return self._context[key]
private-gpt-ollama-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-ollama-1  | KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
...

Let me know if there's any additional info I can provide that would be helpful, thanks!

Steps to Reproduce

  1. git clone https://github.com/zylon-ai/private-gpt.git
  2. cd private-gpt
  3. OLLAMA_HOST=0.0.0.0 ollama serve
  4. docker-compose --profile ollama-api up

Expected Behavior

PrivateGPT successfully reaches the Ollama instance installed locally on the host.

Actual Behavior

HTTP 403 error after issuing the docker-compose --profile ollama-api up command, followed by container exit.

Environment

macOS 14.6.1, Ollama 0.3.6, ollama-api profile

Additional Information

No response

Version

0.6.2

Setup Checklist

  • Confirm that you have followed the installation instructions in the project’s documentation.
  • Check that you are using the latest version of the project.
  • Verify disk space availability for model storage and data processing.
  • Ensure that you have the necessary permissions to run the project.

NVIDIA GPU Setup Checklist

  • Check that all CUDA dependencies are installed and are compatible with your GPU (refer to CUDA's documentation)
  • Ensure an NVIDIA GPU is installed and recognized by the system (run nvidia-smi to verify).
  • Ensure proper permissions are set for accessing GPU resources.
  • Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi)
@mk-relax

Same here on Windows 10. A temporary fix could be to replace ollama:11434 with host.docker.internal:11434 in the config files.
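
For illustration, a minimal sketch of that workaround in the PrivateGPT settings file, assuming the Ollama endpoints live under an ollama: section with api_base and embedding_api_base keys (check settings-docker.yaml for the exact key names in your version):

ollama:
  api_base: http://host.docker.internal:11434            # assumed key; was http://ollama:11434
  embedding_api_base: http://host.docker.internal:11434  # assumed key; was http://ollama:11434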

jaluma (Collaborator) commented Aug 28, 2024

Can you try to disable autopull in settings.yaml?
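
A minimal sketch of that change, assuming the option is named autopull_models under the ollama section of settings.yaml (verify the exact key name in your settings file):

ollama:
  autopull_models: false   # assumed key name; disables automatic model pulls at startup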

Ycl-Phy commented Aug 28, 2024

I am using Ollama 0.3.8 and getting the same issue. I also tried disabling autopull, with no luck.

@MandarUkrulkar

Same issue here.

=/tmp/ollama2036586951/runners
ollama_1 | time="2024-09-02T06:18:22Z" level=info msg="Configuration loaded from flags."
private-gpt-ollama_1 | 06:18:23.067 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
private-gpt-ollama_1 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-ollama_1 | 06:18:30.438 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-ollama_1 | 06:18:30.519 [INFO ] httpx - HTTP Request: GET http://ollama:11434/api/tags "HTTP/1.1 503 Service Unavailable"
private-gpt-ollama_1 | 06:18:30.520 [ERROR ] private_gpt.utils.ollama - Failed to connect to Ollama: Service Unavailable

AlexMC commented Sep 26, 2024

Even when cloning the repo with the fix, I still get the same 403 error. @jaluma, is that to be expected?

@meng-hui (Contributor)

@MandarUkrulkar check that you changed both PGPT_OLLAMA_API_BASE and PGPT_OLLAMA_EMBEDDING_API_BASE to use http://host.docker.internal:11434

You might also need to run ollama pull nomic-embed-text and ollama pull llama3.2 beforehand, because pulling the models from the container seems to time out.
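
As a sketch, the same endpoints can be set through the docker-compose environment instead of editing the settings file directly; the private-gpt-ollama service name follows the containers shown earlier in this thread, so adjust it if your compose file differs:

services:
  private-gpt-ollama:
    environment:
      PGPT_OLLAMA_API_BASE: http://host.docker.internal:11434            # point at Ollama on the Docker host
      PGPT_OLLAMA_EMBEDDING_API_BASE: http://host.docker.internal:11434  # same for the embedding client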

jaluma (Collaborator) commented Oct 14, 2024

You have to run OLLAMA_HOST=0.0.0.0 ollama serve. By default, Ollama refuses all connections except those from localhost and returns status code 403. You should not need to modify these environment variables; everything is packed into the docker-compose file.

@meng-hui (Contributor)

@jaluma thanks for the reply. Indeed, I did not have OLLAMA_HOST=0.0.0.0 set. That resolves the 403.

In this thread there is also a 503, which seems to happen because traefik is not ready yet. I added a simple healthcheck and a depends_on condition, and PrivateGPT works.

My docker-compose modifications are below:

services:
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy
  ollama:
    image: traefik:v2.10
    healthcheck:
      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
      interval: 10s
      retries: 3
      start_period: 5s
      timeout: 5s

jaluma (Collaborator) commented Oct 16, 2024

@meng-hui Thanks for sharing your modifications! Can you open a PR with these changes so that more users can avoid this error?
