V2 #399
Replies: 7 comments 10 replies
-
I also rolled back, but in my case because the performance (both speed and result quality) of V2 is poor for me.
-
I just added a disclaimer to clarify how early a release this is. I honestly didn't think anyone would notice when I slipped the "get v2" button into the settings yesterday. I tend to err on the side of shipping too early, and this time I certainly did. I also got a little overzealous with the local embedding model: unfortunately, embedding a vault's worth of notes with the local model is really slow. I'm already in the process of re-adding OpenAI embeddings, which were never meant to be removed permanently; I simply had too-high expectations for the local model and bad timing on making the new version available. Both models should have been there from the start, sorry!

As for retrieval quality with the local models, I'm integrating Hugging Face's transformers.js, which will make many local embedding models available to try out, so you can individually manage the trade-off of size vs. performance. Performance, from the embedding process through retrieval, is also very much a focus of this update. While version one was acceptable performance-wise, it can be a lot better, and that will also make room for new (and, to me, exciting!) features.

There's a ton I'm excited about bringing to life in SCv2, and the first step was essentially a page-one rewrite based on my experiences from the past year. At first this won't look like much from the outside, basically the same features with fewer lines of code, but over time it will allow many more improvements and significant feature additions. So I'm very sorry that the premature release started on a bad foot, performance-wise especially! In short: yes, v2beta is currently sub-par, and that should have been made clearer. My apologies for the inconvenience. 🌴
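For context on what a local embedding pipeline does under the hood: a model produces one vector per token, and those are typically mean-pooled into a single note embedding and L2-normalized before retrieval. The sketch below is a dependency-free illustration of just that pooling step; the token vectors are made-up stand-ins, not the output of transformers.js or any real model.

```javascript
// Mean-pool a sequence of token vectors into one embedding, then L2-normalize.
// The token vectors here are illustrative stand-ins; a real local model
// would produce them from the note's text.
function meanPool(tokenVectors) {
  const dim = tokenVectors[0].length;
  const out = new Array(dim).fill(0);
  for (const vec of tokenVectors) {
    for (let i = 0; i < dim; i++) out[i] += vec[i];
  }
  return out.map((x) => x / tokenVectors.length);
}

function l2Normalize(vec) {
  const norm = Math.sqrt(vec.reduce((sum, x) => sum + x * x, 0));
  return vec.map((x) => x / norm);
}

const embedding = l2Normalize(meanPool([[1, 0], [0, 1]]));
console.log(embedding); // both components equal after normalization
```

Normalizing up front is a common design choice because it turns cosine similarity into a plain dot product at query time, which matters when comparing a query against every note in a vault.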
-
UPDATE: the Smart Chat is broken in the latest release. Still looking into it.
-
UPDATE: I've been working in Obsidian all day using Smart Connections v2 and the local model. Initially, the local embedding model takes a while longer to generate the embeddings. However, after that initial embedding process, performance seems much smoother.
🌴
-
Love the changes, but I had to roll back due to performance issues, not only with V2 but with my entire vault. Looking forward to a more stable version soon. BTW, this is the only Obsidian GenAI tool I have come across that is actually usable. Appreciate all your work on this amazing product.
-
What are the plans for v2? Will it be open to everyone at some point, or will it remain only for paying insiders?
-
Moved discussion to #432
-
Brian,
I updated to V2 but my experience caused me to roll back.
Is it intended V2 functionality that the embeddings are no longer sent to OpenAI? It appears that the V2 Smart Connections plugin only sends the top 20 to 30 notes, chosen from the local embeddings file. I was checking my OpenAI usage stats, and no embeddings were uploaded until I rolled back to the previous version. Good thing my vault is fairly small, because I have had to restart the embedding process several times to get the vault back to its last good state.

Using V2, query performance on my vault was significantly worse than with the V1.6.48 derivative. The loss of the ability to check the OpenAI API connection is also a step backward. For my use case, I would prefer finer-grained control over the OpenAI API calls, with the ability to set temperature and frequency penalties.

I also had problems with the notifications in the top-right corner: they would hang, showing files still outstanding for embedding, with no way to see what was causing it. It always seemed to be the remainder of the files after the last batch of 10 was reported complete.
On a positive note: showing the text block count rather than the note count is a welcome update.
Happy to give you more feedback. I love the plug-in but, selfishly, this update is not for me as it stands.
Marc
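The top-20-to-30 behavior Marc describes amounts to a nearest-neighbor filter over locally stored note embeddings, with only the highest-scoring notes forwarded anywhere. A dependency-free sketch of that idea, assuming cosine similarity as the ranking metric (the note paths, vectors, and `topK` helper below are made up for illustration, not the plugin's actual code):

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank notes by similarity to a query embedding and keep only the top k.
function topK(queryEmbedding, notes, k = 20) {
  return notes
    .map((note) => ({ ...note, score: cosine(queryEmbedding, note.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Tiny made-up "vault": real embeddings would come from a model.
const notes = [
  { path: 'a.md', embedding: [1, 0, 0] },
  { path: 'b.md', embedding: [0.9, 0.1, 0] },
  { path: 'c.md', embedding: [0, 0, 1] },
];
console.log(topK([1, 0, 0], notes, 2).map((n) => n.path)); // ['a.md', 'b.md']
```

Selecting a small top-k locally would explain the usage-stats observation above: only the query needs an API call, while the per-note embeddings and ranking stay on disk.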