Replies: 6 comments 7 replies
-
I meant to add: is there something else I should include from a configuration/settings/debug file that would help debug why the chat is only partially working and the context is not used?
-
For new content, I've got to prime the model by ensuring Smart Connections Files loads. If it doesn't, then the index isn't up to date. If you open your key source note and then click Files, it will start embedding. Once the embeddings exist, direct references are useful for dialing in the context of the response.
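To illustrate why the index matters: tools like this typically embed each note, then rank notes by cosine similarity against the embedded question, so a stale index means the right notes never even become candidates for the chat context. A generic sketch of that retrieval step (my illustration, not Smart Connections' actual code):

```ts
// Generic sketch of embedding-based retrieval, not Smart Connections' code.
// A note that was never embedded (stale index) can never be retrieved.
type EmbeddedNote = { path: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank indexed notes against the embedded question; the top hits become context.
function topContext(question: number[], index: EmbeddedNote[], k = 5): EmbeddedNote[] {
  return [...index]
    .sort((x, y) => cosine(question, y.embedding) - cosine(question, x.embedding))
    .slice(0, k);
}
```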
-
Thanks @jwhco — I think I'm following what you are saying, but I still can't get context to be used in the chat. Here's what I tried:
-
@jwhco or @dmatt, did either of you figure out how to fix this? I'm running a local implementation of Gemma2 with settings that appear to be correct, based on getting actual responses, but I'm hitting the same issue as described. However, if I change from local to OpenRouter Gemma2:9b and just copy/paste the same prompt, it works perfectly fine. Given this isn't an active issue for many others, I assume I've set up my local model wrong. I did notice an API key error in the console, but I can't find where it would be looking for an API key. Do you have any advice?
-
The prompt we are having trouble with asks about recent notes in general, while your prompt references a specific note and asks for a summary. You're right to suspect an error in the local model setup. I would try the same prompt with both the local model and OpenRouter access. Choose the most straightforward configuration while checking the local model outside of Smart Connections. The API key issue is odd because you got a summary in the chat response. Try one of the prompts above before you change anything; the API key error could be something else.
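To rule out the plugin entirely, you can hit the local model directly. A minimal sketch, assuming a stock Ollama install listening on its default port 11434 (adjust the model name to whatever you pulled):

```ts
// Sketch: query a local Ollama server directly, bypassing Smart Connections.
// Assumes the default Ollama endpoint at localhost:11434.
async function testOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma2:9b", // swap in the model you actually pulled
      prompt: "Summarize: testing direct access to the local model.",
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  console.log(data.response); // a completion here means the server itself is fine
}

testOllama().catch(console.error);
```

If that returns a completion but the plugin still ignores context, the problem is in the plugin's model settings rather than the model itself.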
-
Is there any way to see what API requests are going out? If I open the Developer Console in Obsidian (Ctrl-Shift-I) and switch to the Network tab, it doesn't show any API requests. I want to see what's being passed in the context and the actual prompt. It struggles to answer questions correctly even when it shows the right note in the context (by the way, it would be great if we could see the full list of blocks/sources that make it into the context). If I load the same notes into my Open-WebUI knowledge base and ask the same model the same question, it gives me correct results, though it uses a different embedding model. I love the idea, but as of now I can't make it work properly, at least with a local LLM.
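The closest workaround I've found is patching window.fetch from the developer console so every call gets logged. A quick hack, and it only catches traffic that actually goes through window.fetch; anything sent via Node or Electron networking APIs won't show up:

```ts
// Paste into Obsidian's developer console (Ctrl-Shift-I) to log fetch traffic.
// Only covers window.fetch; requests via Node/Electron networking won't appear.
const originalFetch = window.fetch.bind(window);
window.fetch = async (input, init) => {
  console.log("fetch request:", input, init && init.body); // URL plus payload
  const response = await originalFetch(input, init);
  console.log("fetch response:", response.status, input); // status for that call
  return response;
};
```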
-
I can't seem to get the chat to work. It pulls in some notes as references but then doesn't use them and claims it can't answer my question.
I've tried general prompts which should be answerable, like:
I'm using a custom local model, llama3:70b-instruct-q4_0, installed with Ollama, running plugin version 2.1.86. Here you can see it pulled in some notes for "context" but then did not use them. Why?
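One thing I still need to rule out is the context window: Ollama has historically defaulted to a small num_ctx (2048 tokens), so the retrieved notes could be getting truncated before the model ever sees them. A sketch of how I'd test that directly against Ollama, outside the plugin (the note text and helper name here are placeholders of mine):

```ts
// Sketch: ask the same question directly against Ollama with a larger context
// window, to check whether a small default num_ctx is truncating the notes.
// Assumes the default local Ollama endpoint; askWithContext is a made-up helper.
async function askWithContext(noteText: string, question: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3:70b-instruct-q4_0",
      prompt: `Context:\n${noteText}\n\nQuestion: ${question}`,
      stream: false,
      options: { num_ctx: 8192 }, // raise the context window above the default
    }),
  });
  return (await res.json()).response;
}
```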