Chat UI not reacting to failed response with code 400 #955

Open
rruusu opened this issue Jan 9, 2025 · 12 comments

@rruusu commented Jan 9, 2025

I sent the following message to Phi4 in Ollama (local):

Can you summarize this document: https://ai.plainenglish.io/vanishing-gradient-problem-in-rnns-d362235005c

The UI just keeps spinning, as if it were still waiting for a response.

The Ollama server log shows a 400 response after just 20 ms:

[GIN] 2025/01/09 - 13:36:49 | 400 |     20.1039ms |       127.0.0.1 | POST     "/api/chat"

Inspecting the transaction in the developer console shows the following.

Request:

plugin:smart-connections:13392 {
  "url": "http://localhost:11434/api/chat",
  "method": "POST",
  "body": "{\"model\":\"phi4:latest\",\"messages\":[{\"role\":\"user\",\"content\":\"Can you summarize this document: https://ai.plainenglish.io/vanishing-gradient-problem-in-rnns-d362235005c\\n\\nUse the \\\"lookup\\\" tool.\"}],\"options\":{\"temperature\":0.3,\"top_p\":1},\"stream\":false,\"tools\":[{\"type\":\"function\",\"function\":{\"name\":\"lookup\",\"description\":\"Performs a semantic search of the user's data to surface relevant content.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"hypotheticals\":{\"type\":\"object\",\"description\":\"Predicted relevant notes in markdown format. Provide at least three.\",\"properties\":{\"1\":{\"type\":\"string\"},\"2\":{\"type\":\"string\"},\"3\":{\"type\":\"string\"}},\"required\":[\"1\",\"2\",\"3\"]}},\"required\":[\"hypotheticals\"]}}}],\"format\":\"json\"}"
}

Response:

{
    "status": 400,
    "headers": {
        "content-length": "73",
        "content-type": "application/json; charset=utf-8",
        "date": "Thu, 09 Jan 2025 11:47:30 GMT"
    },
    "arrayBuffer": {},
    "json": {
        "error": "registry.ollama.ai/library/phi4:latest does not support tools"
    },
    "text": "{\"error\":\"registry.ollama.ai/library/phi4:latest does not support tools\"}"
}

Exception:

plugin:smart-connections:13394 Error: Obsidian request failed
    at SmartHttpObsidianRequestAdapter3.request (plugin:smart-connections:13388:42)
    at async SmartHttpRequest3.request (plugin:smart-connections:13348:12)
    at async SmartChatModelOllamaAdapter.complete (plugin:smart-connections:11148:23)
    at async SmartChatModel.invoke_adapter_method (plugin:smart-connections:10305:12)
    at async SmartChatModel.complete (plugin:smart-connections:10446:12)
    at async ScThread.complete (plugin:smart-connections:14372:24)
    at async SmartMessage.init (plugin:smart-connections:15227:7)

Exception:

Uncaught (in promise) TypeError: Cannot read properties of null (reading 'error')
    at ScThread.complete (plugin:smart-connections:14373:20)
    at async SmartMessage.init (plugin:smart-connections:15227:7)

Version: 2.3.47

@rruusu (Author) commented Jan 9, 2025

I think it is the latter exception that prevents the UI from reacting to the error response. It seems that, for some reason, the response variable returned by this.chat_model.complete(request2) is null.
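
For illustration, a guard along these lines would at least let the UI surface the failure instead of spinning forever. This is a sketch only, not the actual plugin source: `chat_model.complete` and `ScThread.complete` come from the stack trace above, while `show_error` is a hypothetical stand-in for whatever UI update the plugin uses:

```js
// Sketch only (not the plugin's code): guard against the null response seen above.
// show_error is a hypothetical callback for updating the chat UI.
async function completeThread(chat_model, request, show_error) {
  const response = await chat_model.complete(request);
  if (!response) {
    // The adapter threw on the 400 and complete() resolved to null;
    // bail out here instead of reading `response.error` on null.
    show_error("Model request failed; see the developer console for details.");
    return null;
  }
  if (response.error) {
    show_error(typeof response.error === "string" ? response.error : JSON.stringify(response.error));
    return null;
  }
  return response; // normal completion path continues from here
}
```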

@Pyrolyzed commented

I'm experiencing this issue as well, using llama3 on my local machine. I see the words "expecting lookup" and the loading indicator, but of course no result is ever received. This was the prompt:

Based on the notes in /3 - Permanent Notes/ , what is my Operating System?

It was just a question to test the AI, haha.

@goblinoats commented

Can confirm this is happening to me as well; I'm using Ollama with the default built-in adapter.

(running v2.3.49)

@evetsagg commented

I'm also getting a 400, but with a different error:

json: cannot unmarshal array into Go struct field ChatRequest.messages of type string

I'm using a remote Ollama server in custom API mode:
path: /api/chat
streaming: off

The Ollama server works from other chat frontends.

v2.3.49

@brianpetro (Owner) commented

Thanks everyone for reporting this. It seems the issues with Ollama are likely model-specific. In #955 (comment), the issue appears to be that the model doesn't support tools, which are currently required to use the lookup notes function from within the chat. Researching which models support tools should fix this error.

For additional errors, please let me know which model you are using with Ollama. Screenshots of error codes/messages are helpful.

Lastly, if any models seem to be working particularly well, sharing that would also be helpful 🌴
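
For anyone who wants to check a model quickly, a probe like this against the same /api/chat endpoint should reproduce the 400 from the original report when tools aren't supported. This is a sketch; the dummy tool definition and model name are just examples:

```js
// Sketch: probe whether a local Ollama model accepts a `tools` array, using
// the same /api/chat endpoint and 400 error seen in the original report.
async function supportsTools(model) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "ping" }],
      stream: false,
      tools: [{
        type: "function",
        function: { name: "noop", description: "dummy probe tool", parameters: { type: "object", properties: {} } },
      }],
    }),
  });
  if (res.status === 400) {
    const { error } = await res.json();
    console.log(model, "->", error); // e.g. "... does not support tools"
    return false;
  }
  return true;
}

supportsTools("phi4:latest").then((ok) => console.log("tools supported:", ok));
```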

@evetsagg commented Jan 27, 2025

> Thanks everyone for reporting this. It seems the issues with Ollama are likely model-specific. In #955 (comment), the issue appears to be that the model doesn't support tools, which are currently required to use the lookup notes function from within the chat. Researching which models support tools should fix this error.
>
> For additional errors, please let me know which model you are using with Ollama. Screenshots of error codes/messages are helpful.
>
> Lastly, if any models seem to be working particularly well, sharing that would also be helpful 🌴

For the error I got (json: cannot unmarshal array into Go struct field ChatRequest.messages of type string), I copied and pasted the exact same request in Postman and received the same error response. On closer inspection, it seems the content block in the POST request is not compatible with the Ollama format (using deepseek r1 14b). The content block for Ollama only supports "content": "some message" instead of { "type": "text", "text": "some message" } (I have verified this in Postman).
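
For reference, a small normalization step like the following would flatten the OpenAI-style content array into the plain string Ollama expects. This is a sketch based on the two formats described above, not the plugin's actual code:

```js
// Sketch: convert an OpenAI-style message, whose content may be an array of
// { type: "text", text: ... } parts, into the plain-string content that
// Ollama's /api/chat accepts. Shapes follow the formats discussed above.
function toOllamaMessage(msg) {
  const content = Array.isArray(msg.content)
    ? msg.content.filter((part) => part.type === "text").map((part) => part.text).join("\n")
    : msg.content;
  return { role: msg.role, content };
}

// Example:
// toOllamaMessage({ role: "user", content: [{ type: "text", text: "hi" }] })
// -> { role: "user", content: "hi" }
```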

A question for Ollama users serving from a remote computer: what should we use to connect? The current Ollama preset doesn't seem to support remote servers.

@brianpetro (Owner) commented

@evetsagg good to know.

The custom API endpoint does indeed make a different request than the Ollama adapter (it uses the default OpenAI adapter, which allows the content-objects array).

We might be able to add a server URL setting to the Ollama adapter to override the default localhost. It sounds like that is probably the only thing you need; does that sound right to you?
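
Roughly something like this, as a hypothetical sketch (illustrative names only, not the actual adapter code):

```js
// Hypothetical sketch of the proposed setting; illustrative only.
class OllamaAdapter {
  constructor(settings = {}) {
    // A new host setting would override the hard-coded localhost default.
    this.host = settings.host || "http://localhost:11434";
  }
  get chat_endpoint() {
    return `${this.host}/api/chat`;
  }
}

// new OllamaAdapter({ host: "http://my-server:11434" }).chat_endpoint
// -> "http://my-server:11434/api/chat"
```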

🌴

@evetsagg commented

> @evetsagg good to know.
>
> The custom API endpoint does indeed make a different request than the Ollama adapter (it uses the default OpenAI adapter, which allows the content-objects array).
>
> We might be able to add a server URL setting to the Ollama adapter to override the default localhost. It sounds like that is probably the only thing you need; does that sound right to you?
>
> 🌴

Yea, that'll work. Thanks.

@ujikol commented Jan 29, 2025

I have the issue when running deepseek-r1:1.5b with Ollama.

@brianpetro (Owner) commented

@ujikol are you using the Ollama adapter (or Custom API adapter) and are there any errors in the console logs that can be screenshotted? 🌴

@ujikol commented Jan 30, 2025

Ollama (local)

[screenshot attached]

@ujikol commented Jan 30, 2025

Custom API does not work either.

[screenshots attached]

A manual request works:

curl -X POST -d '{"model": "llama3.1:latest", "messages": [{ "role": "user", "content": "Are you a robot?" }], "stream": false}' http://localhost:11434/api/chat
