"No model selected" error when using "Custom API" #931
Comments
I did try making sure that the OpenAI settings had a model selected first, even though that shouldn't be related.
@davedawkins does the validation warning persist when you hit the "+" (plus) button to create a new chat? Do any errors occur when you try to submit a chat message? Thanks for the screenshots and your help in solving this 🌴
However, there is no evidence in the DevTools Network tab that any connection is even attempted.
@davedawkins thanks for the screenshots. The lack of requests in the Network tab is probably because the plugin uses Obsidian internals to make the request, and those don't get logged there. It might be worth trying the "enable cors" setting (currently toggled off) in the LM Studio settings. If that doesn't work, I'll probably need to get an instance running locally myself to debug. 🌴
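In the meantime, one way to rule out basic connectivity (since Obsidian's request helper won't show in the Network tab) is to call LM Studio's OpenAI-compatible server directly. A minimal sketch, assuming LM Studio's default address of http://localhost:1234 and Node 18+ for the global fetch:

```typescript
// Connectivity check against LM Studio's OpenAI-compatible server.
// Assumes the default address http://localhost:1234; adjust if yours differs.
async function checkLmStudio(): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/models");
  if (!res.ok) {
    throw new Error(`LM Studio responded with HTTP ${res.status}`);
  }
  const body = await res.json();
  // Each entry's `id` is the model identifier to use in chat requests.
  console.log("Loaded models:", body.data?.map((m: { id: string }) => m.id));
}

checkLmStudio().catch((err) => console.error("LM Studio unreachable:", err));
```

If this fails from outside Obsidian too, the problem is the server or its address rather than the plugin.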
@davedawkins I added an LM Studio adapter in the latest version. This way you don't have to configure the custom API endpoint. It also automatically imports available models. Let me know how it works 🌴
Wow. Thank you. Can't wait to try it
YES !!
@davedawkins happy to hear that it's working 😊 Which model are you using? I was having trouble triggering the lookup action on the models I was testing (llama-3.2-3b, Qwen-15b).
Model: meta-llama-3.1-8b-instruct

I had to tell it to summarize my test page before it would answer questions about a subject I introduced in that page. Until then it would give me unrelated answers. I thought it was because I wasn't prompting correctly. I'm still learning about LLMs, and my understanding of how embeddings work is minimal.

My guess is that the embeddings aren't being "sent" to LM Studio. If I'm right, I'm using a local "embedding" model in Smart Connections and should be using a "normal" LLM in LM Studio. I've sketched my mental model after the screenshots below.

Screenshots:
- My Chat Log
- Configuration for embeddings
- Configuration for Smart Chat
- Configuration for LM Studio
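To spell out that guess (a purely illustrative sketch of my mental model, not the plugin's actual code): I'd expect the embedding model to rank notes locally, with only the retrieved note text ever reaching LM Studio.

```typescript
// Illustrative local-RAG flow; the names here are made up for the sketch.
type Note = { path: string; text: string; embedding: number[] };

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Step 1: the local embedding model only ranks notes; nothing leaves the vault.
function topNotes(queryEmbedding: number[], notes: Note[], k = 3): Note[] {
  return [...notes]
    .sort((a, b) => cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding))
    .slice(0, k);
}

// Step 2: only the retrieved note *text* goes to LM Studio, inside the prompt.
function buildPrompt(question: string, retrieved: Note[]): string {
  const context = retrieved.map((n) => `From ${n.path}:\n${n.text}`).join("\n\n");
  return `Context from my notes:\n${context}\n\nQuestion: ${question}`;
}
```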
@davedawkins thanks for the follow-up. The chat should have triggered lookup after the first "based on my notes" message. This is the issue I was referring to before: the lookup tool is included in the request sent to LM Studio, with both a parameter (tool_choice) that's supposed to force using the tool and a system prompt that further requests tool usage (built specifically for the adapter, based on a similar implementation for Ollama), yet the model still fails to call the tool properly. This probably has to do with the model not natively supporting tool use. I tried finding models in LM Studio that explicitly support tool use, but it wasn't very straightforward (I couldn't find any without referencing outside resources).

Mentioning notes specifically in messages, like the second message in the screenshots, should always work because no tool calling is required. If you can find a local model that "natively" supports tools (this is the language used in the LM Studio docs), it might be more likely to call the tool as expected. Beyond that, special logic would need to be added for non-tool-calling models, which is something I decided against adding for the time being since tool calling is becoming more ubiquitous 🌴
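For reference, the request we send looks roughly like the sketch below. The lookup schema shown is a simplified illustration rather than the adapter's exact tool definition:

```typescript
// Rough shape of a tool-forcing request to LM Studio's OpenAI-compatible
// /v1/chat/completions endpoint. The lookup schema is simplified for
// illustration and is not the adapter's exact tool definition.
const request = {
  model: "meta-llama-3.1-8b-instruct",
  messages: [
    { role: "system", content: "Use the lookup tool to retrieve relevant notes." },
    { role: "user", content: "Based on my notes, what did I say about X?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "lookup",
        description: "Semantic search over the user's notes.",
        parameters: {
          type: "object",
          properties: {
            hypotheticals: {
              type: "array",
              items: { type: "string" },
              description: "Hypothetical note passages to search for",
            },
          },
          required: ["hypotheticals"],
        },
      },
    },
  ],
  // tool_choice is supposed to force the model to call lookup; models
  // without native tool support often ignore it and reply in plain text.
  tool_choice: { type: "function", function: { name: "lookup" } },
};

async function sendLookupRequest(): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  // A tool-capable model returns message.tool_calls here; a model without
  // native tool support tends to answer with plain content instead.
  console.log((await res.json()).choices?.[0]?.message);
}

sendLookupRequest().catch((err) => console.error(err));
```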
Is it more likely that it's LM Studio that isn't supporting tool use? I understand models to be purely data, with the hosting engine responsible for driving prompts through the model (and therefore for tool use?).
LM Studio is supposed to support tools, and it has worked in some instances, but it seems sporadic. That's why I think it's a model issue rather than an LM Studio issue. Here's the link to the relevant section of the LM Studio docs: https://lmstudio.ai/docs/advanced/tool-use#supported-models 🌴
Obsidian 1.7.7
Smart Connections 2.3.45
macOS Sequoia
Custom API settings