Getting Started with v2.0 #432
8 comments · 22 replies
-
Very excited about the new local models. Could you expand on how they compare to the model used in version 1? Can I get the same or better results using a local model as I could with the OpenAI API?
-
I'll keep messing around with it; it seems good so far. I'm really excited for this, though.
-
I just upgraded to v2. By default, the Blocks Embedding Model "BGE-micro-v2" was selected. It embedded all the blocks, but Smart Chat wasn't generating any useful content from my notes when I asked it questions. So I enabled "Jina-v2-small-2k" as the Notes Embedding Model. Smart Chat still doesn't generate useful content for me, and now every time I open Obsidian, it re-embeds all the smart notes (which takes half an hour). So, two issues here! I'm not an AI expert; I just know Smart Connections was very useful before the upgrade, and now there are a lot more options and the defaults don't work for me :)
-
@brianpetro I've upgraded to v2 and set the plugin to OpenAI's most powerful model and embedding tool (accuracy is my most crucial consideration). But the plugin (which worked perfectly in version 1.x) doesn't work at all and, worse, freezes the entire vault where it is installed. How can I manage the problem other than disabling or uninstalling Smart Connections? Any suggestions?
-
Absolutely loving the new upgrade! The suggested connections are more accurate, there's no loss of speed, and chat is just as good. Great job, Brian!
-
Following my previous message, this is what Mixtral 8x7b says:

> Hi there! I hope you're doing well. Thank you for reaching out and considering the use of our Obsidian plugin. To help you make an informed decision on which embedding model to choose, here are some details about each option:
>
> - **BGE-micro-v2** (Local, 512 tokens, 384 dim): A lightweight model suited to smaller datasets and devices with limited resources. It provides fast inference at the cost of slightly lower accuracy than larger models.
> - **BGE-small** (Local, 512 tokens, 384 dim): Similar to the micro version but with a higher capacity for handling more complex data and tasks, offering better performance while maintaining reasonable inference times.
> - **Jina-v2-small-2K** (Local, 2,048 tokens, 512 dim): A larger model that can handle longer sequences of text than the previous two options, trading slightly slower inference and higher memory usage for better accuracy.
> - **Jina-v2-small-4K** (Local, 4,096 tokens, 512 dim): Even larger than the 2K version, processing longer sequences with improved accuracy at the cost of increased memory usage and slower inference.
> - **Jina-v2-small-8K** (Local, 8,192 tokens, 512 dim): The largest of the options, capable of handling very long sequences of text with high accuracy, but requiring significant memory and potentially slower inference.
>
> Considering your M2 with 96 GB RAM, you should be able to handle any of these models without issue. However, if you're working on a large dataset or require the highest possible accuracy, I would recommend either Jina-v2-small-4K or Jina-v2-small-8K. If inference speed is more important for your use case, BGE-micro-v2 might be a better choice.

Is this accurate?
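Whichever model is selected, the token limit bounds how much of a note fits into one embedding, and the dimension is just the length of the output vector; retrieval then boils down to nearest-neighbor search over those vectors. A minimal sketch of cosine-similarity ranking with toy vectors (illustrative only, not the plugin's actual code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" (the real models above output 384 or 512 dims).
query = np.array([1.0, 0.0, 1.0, 0.0])
notes = {
    "note_a": np.array([1.0, 0.1, 0.9, 0.0]),  # points in nearly the same direction
    "note_b": np.array([0.0, 1.0, 0.0, 1.0]),  # orthogonal to the query
}

# Rank notes by similarity to the query, best match first.
ranked = sorted(notes, key=lambda k: cosine_similarity(query, notes[k]), reverse=True)
print(ranked)  # note_a ranks above note_b
```

Under this view, a larger embedding dimension or a longer token window changes the quality of the vectors, not the ranking mechanism itself.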
-
Are the Notes Embedding Model and Blocks Embedding Model free to choose, or do I need additional settings? For example, do I need to download a model to a certain path to use it?
-
Everyone can now upgrade to the v2.0 early release in the Smart Connections settings.
Help get Smart Connections ready for the version 2 general release:
- Do you have questions about the new version?
- Are you having trouble getting started?
- Do you have any feedback on the new embedding models?
- Do you have any other feedback that might be helpful?
I want to know 💡
Thanks for your help
🌴