Support for local models via ollama #89
Replies: 4 comments
-
I'll add one more thing: when composing the endpoint string from the settings, it would be good to normalize it and handle doubled slash characters, so the output logs don't show "Request URI: http://localhost:11434//v1/chat/completions". Also, strangely (something I don't understand), the "add summary" command works, but the "explain" command doesn't.
AddSummary: (request log attached)
Explain: (request log attached)
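For what it's worth, the normalization meant here is just trimming slashes before joining the configured base endpoint and the API path. A minimal sketch, assuming the two parts are joined as plain strings (the function name and values are illustrative, not the extension's actual code):

```python
def join_endpoint(base: str, path: str) -> str:
    """Join a configured base endpoint and an API path without doubling slashes."""
    return base.rstrip("/") + "/" + path.lstrip("/")

# A trailing slash in the configured endpoint no longer produces "//v1/...":
print(join_endpoint("http://localhost:11434/", "/v1/chat/completions"))
# http://localhost:11434/v1/chat/completions
```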
-
Hi @ViktorAgafonov,

First of all, it's worth mentioning that I've never tested the integration of the extension with Llama myself, but I know it works because I've already received positive feedback about it. The idea is simple: you just need an LLM implementation, whatever it may be, local or not, that is compatible with the OpenAI specification, and it will probably work. To make it work, just do what you already did: override the URL and the model.

"It's strange (something I don't understand) but the "add summary" command works, but the "explain" command doesn't work.": It's probably related to Llama and its implementation, but I can't help much since, as I said, I've never tested Llama myself. However, you can try changing the command. If you look at the log, you'll see that for this command the extension just sends "Explain", and maybe Llama didn't quite understand what needs to be done. You could try changing that command in the extension options to something more detailed, like "Explain this code", and test different commands until you find one that works well.
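As a quick sanity check outside the extension, you can hit the same OpenAI-compatible endpoint directly. A minimal sketch using the openai Python package, assuming Ollama is running on its default port and a model such as llama3 has been pulled (the model name and prompt are just examples):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API; the key can be any non-empty string.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # assumed model name; use whatever you have pulled locally
    messages=[
        # A more explicit instruction than the bare "Explain" the command sends.
        {"role": "user", "content": "Explain this code:\n\nint Add(int a, int b) => a + b;"},
    ],
)
print(response.choices[0].message.content)
```

If this request works but the extension's "Explain" command doesn't, the problem is likely the prompt the command sends rather than the endpoint itself.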
-
Maybe this will help somehow. I don't know what to do with this:
-
The project you mentioned is a library designed to make it easier to integrate .NET applications with the Ollama API. Instead of having to write your own HttpClient code, entities, and so on, you just add the library to your application and everything is ready to go. My extension uses a similar library, but one specific to the OpenAI and Azure OpenAI APIs. However, since the Llama API is compatible with the OpenAI API, that library can also be used with Llama, as well as with other APIs that are compatible with OpenAI.
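To make "compatible with the OpenAI API" concrete: any client that speaks the OpenAI chat completions format can talk to Ollama's endpoint. A minimal sketch at the HTTP level using the requests package, assuming the default local endpoint and an illustrative model name:

```python
import requests

# The same request body the OpenAI API expects, sent to Ollama's local endpoint.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",  # assumed; substitute a model you have pulled
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```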
-
Add (and document) how the extension works with models installed locally via the Ollama endpoint (default http://localhost:11434/v1) by overriding the model. Does this overlap with adding other models?