Replies: 2 comments 3 replies
-
@matttrent thanks for bringing this up. There is a recent feature request about this here: #559. Please make sure to comment on that feature request so that I can better prioritize it! Thanks for participating in the Smart Connections community.
-
Please forgive me if I am missing something or beating a dead horse here; I am relatively new to the LLM/GPT world. Given Obsidian's secure reputation and user base, I suspect many users would be interested in running local models rather than web-based ones (like ChatGPT). Perhaps I have misunderstood something somewhere. I just want to guard my data/notes, and I feel much better running a local LLM wherever possible. Thanks for all your hard work!
-
I came across the Smart Connections plugin last week and have been enjoying exploring my vault in a new way. I upgraded to v2.1, connected it to my local Ollama server, and tested local chat with the new Llama 3 model. It's quite impressive.
With v2.1's new local server support for chat, would it be possible to use the same local server for generating embeddings? I'd prefer to keep everything local, and the default CPU-based embedding generation is pretty slow and locks up my Obsidian instance for some time.
I'd love to use the nomic-embed-text model for embeddings and have it run quickly via GPU inference in Ollama. It seems like all the pieces are already implemented but aren't connected in code or exposed in the settings. Any chance this could be added as an option?
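For what it's worth, here's a minimal sketch of what fetching an embedding from a local Ollama server could look like, assuming Ollama is running on its default port (11434) and `nomic-embed-text` has been pulled. The `embedWithOllama` function name is just illustrative, not anything from the plugin's code:

```typescript
// Sketch: request an embedding from a local Ollama server.
// Assumes `ollama pull nomic-embed-text` has been run and the
// server is listening on the default http://localhost:11434.

async function embedWithOllama(text: string): Promise<number[]> {
  const response = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "nomic-embed-text", // local embedding model served by Ollama
      prompt: text,
    }),
  });
  if (!response.ok) {
    throw new Error(`Ollama embeddings request failed: ${response.status}`);
  }
  // Ollama returns the vector under the `embedding` key.
  const data = (await response.json()) as { embedding: number[] };
  return data.embedding;
}

// Example: embed a note's contents.
embedWithOllama("Smart Connections links related notes in my vault.")
  .then((vector) => console.log(`Got ${vector.length}-dimensional embedding`));
```

If the plugin could point its embedding step at an endpoint like this (the same way the chat already points at the local server), the heavy lifting would happen on the GPU in Ollama instead of blocking Obsidian.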