Unlock Local AI Processing in Obsidian (feature request) #302
Comments
I may make a PR for this. I've gotten it to work on my local instance of text-generation-webui. All that needs to be done to change the URL is to open |
Never mind, someone beat me to it. |
In case this is not prioritised here, it may be useful to look at the Khoj/Obsidian plugin, which is open source and enables Llama 2. |
@nomadphase thanks for sharing that project. I checked it out, and it does require a separate desktop application to be installed to use the Obsidian plugin. This is the route I expect will be necessary to utilize local models with Obsidian.

While there hasn't been much to see publicly lately in terms of plugin updates, I have been doing a lot in private that will have big implications for this plugin. For example, allowing the Smart Chat to add to and edit notes is just one long weekend away. And during my weekday work, I've been chugging away at something that, when it makes its way into Obsidian, will be unlike anything else I've seen publicly as far as AI tools are concerned.

To clarify why I bring this up now: I've been focusing on using GPT-3.5 for that project because I want the result to be compatible with local models. Basically, my hypothesis is that, if I can make it work with GPT-3.5, then the same functionality should work with local models very soon. It's still been tough to find a local model for the embeddings that beats OpenAI's

And lastly, thanks everyone (@dragos240 @dicksensei69) for your interest in Smart Connections. I'm looking forward to making more updates soon!

Now back to it, |
I'd love to be kept updated on this topic. I'm currently playing with Docker for Windows to run LocalAI, and the capabilities the owner hinted at above would be a genuine game changer. |
Here are some local LLM related tools that might be of interest:
|
What about using g4f? https://github.com/xtekky/gpt4free |
@wenlzhang @huachuman thanks for the resources! I'm still reviewing options and requirements, but I think we're pretty close to having a local embedding model. The chat models still require an amount of hardware resources that makes me pause, but we can do a lot with embeddings alone. And if we were to still use OpenAI for the chat responses while relying on a local embedding model, that would also significantly reduce the exposure of vaults to OpenAI, as only the context used for a specific query would be sent to their servers. 🌴 |
In addition to local LLM support, would you consider an LLM router such as https://withmartian.com/ that boasts faster speeds and reduced costs? I haven't tried this service out yet, but if it would be considered, I'd be happy to investigate further. |
Any updates on how to connect Ollama? |
V2.1 will enable configuring API endpoints for the chat model. I can't say how featureful this option will be compared to what's possible with the OpenAI API, especially considering that I intend to add significant capabilities via function calling in v2.1 and I'm not up to date on where local models are in that regard. Still, the configuration should allow for integration with local models for individuals capable of setting up the model locally to be accessed via

I hope that helps! 🌴 |
I desperately need this feature ^^ I tried editing the OpenAI URLs in main.js and pointing them at my local LLM served by LM Studio, but it didn't work. |
LM Studio provides proxy functionality compatible with the OpenAI API. |
@wwjCMP yes, it does. I've already connected it in my development version of Smart Connections. Configurable endpoints/models is just one of the chat features that will be rolling out with 🌴 |
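For anyone who wants to sanity-check that proxy outside of Obsidian first, here is a minimal sketch of an OpenAI-style chat request against an LM Studio local server. The port (1234) and the placeholder model name are assumptions based on LM Studio's usual defaults, so adjust them to match your own server:

```js
// Minimal sketch, assuming LM Studio's local server is running on its usual default port.
// Node 18+ (built-in fetch); the model field is a placeholder since LM Studio serves whichever model is loaded.
async function askLocalModel(prompt) {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; some clients require a non-empty value even if the server ignores it
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

askLocalModel("Summarize my note on project planning.").then(console.log);
```

If a request like this works from a script but the plugin still fails, the problem is more likely in how the base URL and model name are entered in the plugin settings than in the server itself.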
Support for an OpenRouter connection would be huge, as it gives you access to a great number of models using the same API: https://openrouter.ai/docs#models |
@UBy that looks interesting, thanks for the tip. |
I was about to post a feature request but did a search first and found your comment, @UBy. Glad @brianpetro likes it! |
Thanks for the great plugin. I'd like to add to the requests for local LLM usage: if we're allowed to modify the base_url for the model, can we ensure it will work beyond just localhost? I think a lot of us are hosting our models on servers or gaming desktops at the moment, and I definitely can't run anything locally on my laptop. Very excited for this! Sending data to a third party like OpenAI is a showstopper for me and most people I know who are dabbling in the LLM space currently. |
Hey @leethobbit, happy to hear you like Smart Connections 😊

Custom local chat models are already partially available in the

The current implementation allows custom configuration over

Maybe you can help work out the bugs once

Thanks for participating in the Smart Connections community! |
Bravo!!
Nice update and loading it up now.
Quoting the April 1, 2024 update from WFH Brian (@brianpetro):
Update: Thanks to the help of an individual who prefers to remain unnamed, I got the Smart Chat working with Ollama. So far, the new settings for the local chat model look like this in the v2.1 early release:
[Screenshot: v2.1 early-release local chat model settings] https://github.com/brianpetro/obsidian-smart-connections/assets/1886014/ec79e555-c042-4c32-9437-75fd0cbdf164
🌴
|
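Since the settings screenshot above isn't reproduced here, a brief hedged note on what connecting Ollama typically amounts to: like LM Studio, Ollama can expose an OpenAI-compatible chat endpoint, just on a different default port. The values below are assumptions based on each tool's usual defaults, not the plugin's actual settings:

```js
// Assumed defaults, for illustration only; both servers speak the OpenAI chat-completions format,
// so an OpenAI-style base URL setting can point at either of them.
const OLLAMA_BASE_URL = "http://localhost:11434/v1";   // requires a pulled model, e.g. `ollama pull llama2`
const LM_STUDIO_BASE_URL = "http://localhost:1234/v1"; // serves whichever model is loaded in the server tab
```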
@wwjCMP embedding through Ollama is not yet supported. If this is something you're interested in, please make a feature request here https://github.com/brianpetro/obsidian-smart-connections/issues |
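For anyone curious what such support would involve, here is a minimal sketch of a call to Ollama's native embeddings endpoint. The endpoint path, request shape, and model name are assumptions based on Ollama's own API documentation, not anything in Smart Connections, and they differ from OpenAI's /v1/embeddings format, which is part of why this isn't a drop-in swap:

```js
// Minimal sketch (assumptions noted above): Ollama's native embeddings API.
// Unlike OpenAI's /v1/embeddings, the request uses "prompt" and the response
// returns a bare "embedding" array rather than data[0].embedding.
async function embedWithOllama(text) {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "nomic-embed-text", // placeholder; any embedding model pulled locally
      prompt: text,
    }),
  });
  const data = await res.json();
  return data.embedding; // array of floats
}
```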
It took me a few tries to get it working with LM Studio, so I'm sharing the settings below. Apparently the model name is required even if it's not relevant (so I just put some random characters in there). One thing though: even if I start my question with "Based on my notes..." it doesn't look like any context is being sent to the model. Why could that be? I tried two different local embedding models, but same thing. |
Oh, I just found out that there are loads of warnings and errors in the console. First of all, I guess this is a bug? There should be no need for an API key when using a local backend.

Then it also seems to be struggling to retrieve the embedding and with using a tool. What is

Sorry, realised that a lot of this might be off-topic; I'm just trying to debug the issue...

Edit: never mind, redoing the embedding once more apparently fixed it, except that it still warns about the API key and continuously tries to connect to Smart Connect, logging |
Closing in favor of creating new, more specific issues, since the original request, adding local model support, has been addressed in the latest versions 😊🌴 |
I'm writing to request a feature that would allow users to easily switch between different AI APIs within obsidian-smart-connections. Specifically, I'm interested in being able to toggle between the OpenAI API and emerging alternatives like Oobabooga's textgen and Llamacpp.
These new services offer exciting capabilities like local embeddings and on-device processing that could enhance the Obsidian experience, especially for users who want to avoid sending personal data to third parties. I've found where the API endpoint is configured in the code, and with some tweaking I may be able to switch between them manually. However, having an official option to select different APIs would provide a much smoother experience.
For those wondering, the API endpoint is currently specified in multiple locations in main.js:

- line 1043: `url: "https://api.openai.com/v1/embeddings",`
- line 2666: `const url = "https://api.openai.com/v1/chat/completions";`
- line 2719: `url: "https://api.openai.com/v1/chat/completions",`

To manually change the API, these endpoints could be modified to point to local services like Oobabooga, or to alternative providers like Anthropic. However, this involves directly editing the source code, which is cumbersome.
Ideally, there could be a function that defaults to OpenAI, but allows the API URL to be easily configured as a setting. Users could then switch to local IPs or services with just a simple configuration change. Furthermore, if this setting was exposed through the GUI, it would enable seamless API swapping without any code editing required.
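As a rough illustration of the kind of change being requested (the setting name and helper below are hypothetical, not taken from the plugin's actual code), something along these lines would keep OpenAI as the default while letting users point the plugin at any compatible server with a single setting:

```js
// Hypothetical sketch of a configurable-endpoint helper; names and defaults are illustrative only.
const DEFAULT_ENDPOINTS = {
  embeddings: "https://api.openai.com/v1/embeddings",
  chat: "https://api.openai.com/v1/chat/completions",
};

// `settings.apiBaseUrl` would come from a plugin setting exposed in the GUI,
// e.g. "http://localhost:5000/v1" for a local OpenAI-compatible server.
function getEndpoint(kind, settings = {}) {
  if (settings.apiBaseUrl) {
    const path = kind === "embeddings" ? "/embeddings" : "/chat/completions";
    return settings.apiBaseUrl.replace(/\/+$/, "") + path;
  }
  return DEFAULT_ENDPOINTS[kind];
}

// Example: getEndpoint("chat", { apiBaseUrl: "http://localhost:5000/v1" })
// → "http://localhost:5000/v1/chat/completions"
```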
The open source ecosystem is rapidly evolving, and empowering users to take advantage of these new innovations aligns with Obsidian's ethos of flexibility and customization. Many of us would love to rely on our own local hardware for AI processing rather than being locked into a single provider.
Thank you for your consideration. Obsidian has been invaluable for my workflow, and I'm excited by its potential to integrate some of these cutting-edge AI capabilities in a privacy-preserving way. Enabling easy API switching would be a major step forward. Please let me know if I can provide any other details!