Not respecting/using specified remote address #37

Closed
i-am-david-fernandez opened this issue Jan 23, 2025 · 9 comments
Labels: bug (Something isn't working), question (Further information is requested)

Comments

i-am-david-fernandez commented Jan 23, 2025

Describe the bug
parllama does not appear to be respecting/using the specified/configured remote Ollama address.

I am attempting to use parllama with a remote instance of Ollama. The Ollama server is available at http://10.0.2.2:11434 and is functioning correctly: other tools work with it successfully, and a basic curl test also succeeds:

$ curl http://10.0.2.2:11434
Ollama is running%

I have set OLLAMA_URL to this address, and can confirm both via the GUI (under Options, AI Providers, Ollama, Base URL) and by inspection of ~/.parllama/settings.json (both ollama_host and provider_base_urls.Ollama) that this value is propagating through to the application and its configuration. Note that I've also tried using an explicit --ollama-url CLI option with no change in behaviour.

However, every attempt to initiate a chat (even the most basic "Hello") results in failure with the response [Errno 111] Connection refused. Inspecting the resultant chat configuration at ~/.parllama/chats/<something>.json shows that the field llm_config.base_url is set to http://localhost:11434, and not the remote host I have specified.

I have further confirmed that the problem is the attempt to use localhost: if I separately use socat to create a port forward, via socat tcp-listen:11434,reuseaddr,fork tcp:10.0.2.2:11434 (i.e., redirect traffic on localhost:11434 to 10.0.2.2:11434), chats work as expected.
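
For reference, this is roughly the check I used to compare the configured URL with what ends up in the saved chat config. It is a minimal sketch that just reads the JSON files mentioned above and assumes the default ~/.parllama location:

    import json
    from pathlib import Path

    parllama_dir = Path.home() / ".parllama"

    # Configured Ollama base URL as stored in parllama's settings.
    settings = json.loads((parllama_dir / "settings.json").read_text())
    print("ollama_host:       ", settings.get("ollama_host"))
    print("provider_base_urls:", settings.get("provider_base_urls", {}).get("Ollama"))

    # Base URL actually recorded in each saved chat's llm_config.
    for chat_file in sorted((parllama_dir / "chats").glob("*.json")):
        chat = json.loads(chat_file.read_text())
        print(chat_file.name, "->", chat.get("llm_config", {}).get("base_url"))

In my case the settings values show the remote address, while every chat file shows http://localhost:11434.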

To Reproduce
Steps to reproduce the behavior:

  1. Set OLLAMA_URL to http://<host>:<port>, or pass --ollama-url http://<host>:<port> on the command line
  2. Run parllama
  3. Start a new chat
  4. See error

Expected behavior
I expect the application to use the specified remote Ollama host.

Screenshots
N/A

Desktop (please complete the following information):

  • OS: uname: Linux <REDACTED> 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux, Ubuntu 22.04.5 LTS
  • Browser: N/A
  • Version: parllama 0.3.11

Additional context
I am running parllama as installed via pipx, using Python 3.11. I am also running it over an ssh connection, inside tmux. Note that this has also highlighted another problem, in that parllama crashes with clipman.exceptions.UnsupportedError: Clipboard in TTY is unsupported., but I can work around this by setting XDG_SESSION_TYPE=x11 beforehand.

i-am-david-fernandez (Author) commented:

I should have noted that, on some level, the remote URL is being honoured/used, as the models shown under the Local tab are those I have installed/available in my remote Ollama instance. It seems to be just the chat function that isn't using it.

paulrobello self-assigned this Jan 23, 2025
paulrobello added the bug label Jan 23, 2025

paulrobello (Owner) commented:

Thank you for the thorough bug report. I will look into this and hopefully have a fix by end of day tomorrow.

i-am-david-fernandez (Author) commented:

Thanks for the prompt reply! For what it's worth, I will have a look through the code and try to do some deeper debugging myself.

i-am-david-fernandez commented Jan 23, 2025

This may be in part due to https://github.com/paulrobello/par_ai_core/blob/302f51045aaaba3c8e23dd5f0a8bad6ecb3da73c/src/par_ai_core/llm_config.py#L545

That line

self.base_url = self.base_url or provider_base_urls.get(self.provider)

will, I think, prevent the base_url from being obtained from OLLAMA_HOST later on, in https://github.com/paulrobello/par_ai_core/blob/302f51045aaaba3c8e23dd5f0a8bad6ecb3da73c/src/par_ai_core/llm_config.py#L276

Commenting out that line in llm_config.py does allow OLLAMA_HOST to be used, though that variable is different from both OLLAMA_URL and --ollama-url; I'm still not sure how either of those is intended to propagate.

I'm also not at all sure of the wider implications of removing that line, e.g., with regard to other providers or other users/clients of par_ai_core.
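
To illustrate what I think is happening, here is a much-simplified sketch of the precedence problem. This is my paraphrase, not the actual par_ai_core code; the class, method, and table below are stand-ins for the real ones:

    import os

    # Stand-in for the per-provider default table in par_ai_core.
    provider_base_urls = {"Ollama": "http://localhost:11434"}

    class SketchLlmConfig:
        def __init__(self, provider: str, base_url: str | None = None) -> None:
            self.provider = provider
            # Equivalent of llm_config.py line 545: with no explicit base_url,
            # the provider default fills the slot, so base_url is already
            # "http://localhost:11434" from here on.
            self.base_url = base_url or provider_base_urls.get(provider)

        def client_base_url(self) -> str | None:
            # Equivalent of the later fallback around line 276: OLLAMA_HOST can
            # never win, because base_url was already set to the default above.
            return self.base_url or os.environ.get("OLLAMA_HOST")

    os.environ["OLLAMA_HOST"] = "http://10.0.2.2:11434"
    print(SketchLlmConfig("Ollama").client_base_url())  # prints the localhost URL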

paulrobello commented Jan 23, 2025

Parllama started out Ollama-only, but has gone through quite a bit of evolution to support other providers and configs.
It could very well be different config options conflicting with one another.
I split the "ai-core" out into its own package to make it more reusable for other projects. I am actively updating it as well, and have a release planned for it too.

Thank you for the debug assist.
I am working in the "next-big-thing" branch on parllama and will apply any fixes for this to that branch. I will at the same time apply any fixes to par-ai-core.

I will for sure have things worked out tomorrow.

i-am-david-fernandez (Author) commented:

I have a partial local fix, made by adding a line where the chat's LlmConfig is constructed (existing line quoted for location):

    model_name=self.provider_model_select.model_name,

so that the construction becomes:

            llm_config=LlmConfig(
                provider=LlmProvider(self.provider_model_select.provider_name),
                model_name=self.provider_model_select.model_name,
                # added: pass the configured base URL through from settings
                base_url=settings.provider_base_urls[self.provider_model_select.provider_name],
                temperature=self.get_temperature(),
            ),

i.e., setting the base_url argument when constructing the LlmConfig, obtaining the value from settings. I say and stress "partial" because I don't know whether there are other places where a similar fix may be needed (or perhaps this isn't the right place to fix it). Hopefully, though, this can at least short-cut your effort to find and fix the problem.

Note that, with the above, the line in par_ai_core no longer causes a problem, at least not for this particular use (though the underlying behaviour I mentioned remains).
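
For what it's worth, the resolution order I was implicitly assuming when making that change is roughly the following. This is purely illustrative; the function and argument names are mine, not parllama's:

    import os

    def resolve_ollama_base_url(
        explicit_url: str | None,
        provider_base_urls: dict[str, str | None],
    ) -> str:
        """Illustrative precedence: explicit CLI value, then the saved
        provider_base_urls entry, then OLLAMA_URL / OLLAMA_HOST, then localhost."""
        return (
            explicit_url
            or provider_base_urls.get("Ollama")
            or os.environ.get("OLLAMA_URL")
            or os.environ.get("OLLAMA_HOST")
            or "http://localhost:11434"
        )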

paulrobello (Owner) commented:

Just released v0.3.13. Let me know if it resolves your issue.

paulrobello added the question label Jan 29, 2025

paulrobello (Owner) commented:

v0.3.14 had a pretty critical bug fix, and I just released v0.3.15 with a new copy button on markdown code blocks in chat.
Hopefully your issue was resolved in v0.3.13, but I would update to v0.3.15 before re-testing.

i-am-david-fernandez (Author) commented:

Hi @paulrobello , my apologies for the delay in responding. I've had a chance to test the updates, both v0.3.13 and now v0.3.15, and from what I can tell, the issue has been resolved. Parllama is now respecting and using the OLLAMA_URL environment variable.

Thank you kindly for your work, both in fixing this and on the project/application as a whole!
