Hello and thank you for the excellent application.
I've got a couple of questions about the design choices of Smart Connections, as I'm at a bit of a crossroads with Obsidian LLM integration and need to figure out what I'm going to do.
Currently I have a centralized app that works as a proxy to OpenAI as well as local models, using the OpenAI API format. Integration with Obsidian works in the sense that I can open any app that integrates with OpenAI (actually my proxy) and prompt it, and it has access to all the tools I've built, including ones for formatting markdown and saving a file to the Obsidian vault folder.
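To make that concrete, the tools are declared in the standard OpenAI function-calling format, so anything that speaks the OpenAI API can call them. A rough sketch of the "save to vault" tool (names and paths are simplified placeholders, not my actual code):

```python
# Rough sketch of a "save a note to the vault" tool, declared in the OpenAI
# function-calling format; names and paths are simplified placeholders.
import os

save_note_tool = {
    "type": "function",
    "function": {
        "name": "save_note_to_vault",
        "description": "Save markdown content as a note in the Obsidian vault.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Note path relative to the vault root, e.g. 'Inbox/idea.md'"},
                "content": {"type": "string", "description": "Markdown body of the note"},
            },
            "required": ["path", "content"],
        },
    },
}

def save_note_to_vault(path: str, content: str, vault_dir: str = "~/ObsidianVault") -> str:
    """Executed by the proxy when the model requests this tool."""
    full_path = os.path.expanduser(os.path.join(vault_dir, path))
    os.makedirs(os.path.dirname(full_path), exist_ok=True)
    with open(full_path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Saved {path}"
```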
I've also built a flow that picks the most appropriate agent for whatever the task is. Regarding RAG, I've got that mostly taken care of with a local ChromaDB instance, but I'd like to add some smarts to how the query is done, perhaps using a GPT-4o mini model to formulate a query based on meta tags and topics, or something along those lines.
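Roughly the idea, with a small model rewriting the question into a retrieval query before it hits Chroma (collection name and prompt are just placeholders):

```python
# Sketch of the "smarter query" idea: gpt-4o-mini rewrites the question into a
# short retrieval query (folding in tags/topics), which is then run against a
# local ChromaDB collection. Collection name and prompt are placeholders.
from openai import OpenAI
import chromadb

llm = OpenAI()  # or point base_url at the local proxy
notes = chromadb.PersistentClient(path="./chroma").get_or_create_collection("obsidian_notes")

def smart_query(question: str, n_results: int = 5):
    # Ask the small model for a concise search query derived from the question.
    rewrite = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Rewrite the question as a short search query, including any relevant tags or topics."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content
    # Chroma embeds query_texts itself here; swapping in a custom embedder is the next step.
    return notes.query(query_texts=[rewrite], n_results=n_results)
```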
However, I noticed that Smart Connections handles other things that I had been planning on building out, like modifying existing notes.
I'd love to build Smart Connections into the environment I've stood up, but with the current model, where the app runs locally and then requires a tunnel so OpenAI can reach it, I'm not sure it really fits.
This brings me to my questions (sorry for the wall of text):
Have you thought about using the OpenAI Assistants API instead of a custom GPT? This would allow the plugin to reach out to prompt OpenAI and run whatever tool calls come back locally. The requirement for the tunnel would go away, since the connection would always be outbound. Of course, you'd lose the ability to use the OpenAI chat interface in that scenario, but for me, anyway, that's not all that interesting. As I understand it, the Assistants API also offers some RAG enhancements that would be interesting for folks who aren't trying to keep everything local. Using the Assistants API could also alleviate some security concerns folks might have about logging into the custom GPT.
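Roughly the loop I'm picturing, sketched with the Python SDK and reusing the save_note_to_vault tool from above (a sketch only; the Assistants API is still beta and the exact method names may differ):

```python
# Outbound-only loop: create a run, and when it pauses with requires_action,
# execute the requested tool locally and submit the output back. No inbound
# tunnel is needed. Sketch only; the Assistants API is still in beta.
import json
from openai import OpenAI

client = OpenAI()
assistant = client.beta.assistants.create(
    model="gpt-4o-mini",
    instructions="You help manage an Obsidian vault.",
    tools=[save_note_tool],  # the function schema sketched earlier
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user",
    content="Save a note summarizing today's meeting.")
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

while run.status == "requires_action":
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(call.function.arguments)
        outputs.append({"tool_call_id": call.id,
                        "output": save_note_to_vault(**args)})  # tool runs locally
    run = client.beta.threads.runs.submit_tool_outputs_and_poll(
        thread_id=thread.id, run_id=run.id, tool_outputs=outputs)
```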
It'd be great if the vector DB piece were a bit more configurable. For example, I'd like to point to a server running Ollama (or similar) for embeddings, and a Chroma instance for bouncing the query off.
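Something like this is what I have in mind: embeddings from a remote Ollama server, retrieval against a Chroma server, with the hosts, ports, and embedding model all configurable (the values here are made up):

```python
# Sketch of a configurable setup: embeddings come from an Ollama server and the
# query is bounced off a Chroma instance. Hosts, ports, and the embedding model
# are made-up examples, not defaults of any existing plugin.
import requests
import chromadb

OLLAMA_URL = "http://my-llm-box:11434"  # hypothetical Ollama host
notes = chromadb.HttpClient(host="my-llm-box", port=8000).get_or_create_collection("obsidian_notes")

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; the model name is just an example.
    resp = requests.post(f"{OLLAMA_URL}/api/embeddings",
                         json={"model": "nomic-embed-text", "prompt": text})
    resp.raise_for_status()
    return resp.json()["embedding"]

def search_notes(query: str, n_results: int = 5):
    # Embed remotely, then query Chroma with the resulting vector.
    return notes.query(query_embeddings=[embed(query)], n_results=n_results)
```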