q's: chat framework, context window, 16k model, embedding solution #268
-
I have a few quick questions; I apologize if they are naive in nature, as I am still learning.
Thank you so much for your time and work! Greetings, @brianpetro
-
Hi @brezl8
It depends; you have two options. 1) You can use multiple links, in Obsidian's double-bracket notation, to specify full notes to use as context for your query. Or 2) you can type a forward slash and then select a folder from the drop-down. This will then use parts of the notes in that folder as context for your query.
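As a quick illustration of the two options described above (the note and folder names here are placeholders, not part of the plugin):

```markdown
Based on [[Project Overview]] and [[Meeting Notes]], summarize the current status.

Based on /Projects, summarize the current status.
```

In the first form, each double-bracketed note is pulled in whole; in the second, the drop-down selection tells the plugin to draw relevant chunks from notes inside the folder.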
Technically 8K, but in practice it can be as low as 6K, because of how this plugin in particular estimates token counts.
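To make the 8K-vs-6K gap concrete, here is a minimal sketch of a character-based token estimator. This is an assumption about the general approach, not the plugin's actual code: plugins often approximate tokens as roughly 4 characters each and then pad the estimate conservatively, which shrinks the usable window.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the common ~4-chars-per-token heuristic.

    Real tokenizers (e.g. OpenAI's tiktoken) give exact counts; a heuristic
    like this is cheap but imprecise, so callers pad it to stay safe.
    """
    return max(1, round(len(text) / chars_per_token))


def conservative_estimate(text: str) -> int:
    """Pad the rough estimate by 25% so we never overrun the real limit.

    Padding like this is why an 8K window can behave like ~6K in practice:
    the plugin stops adding context well before the model's true limit.
    """
    return round(estimate_tokens(text) * 1.25)
```

For example, 400 characters of plain English estimate to about 100 tokens, but the padded figure of 125 is what would be counted against the context budget.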
Yes, you can use the 16K gpt-3.5-turbo-16k model.
Smart Connections implements "context-aware" content chunking based on the structure of your notes (currently, that's mainly the headings and the file path), which makes the results better than tools that use less sophisticated chunking methods. Thanks for your interest in Smart Connections!
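The heading-and-file-path chunking described above can be sketched roughly as follows. This is an illustrative assumption about the technique, not Smart Connections' actual implementation; the function name and chunk format are made up here. The idea is to split a note on markdown headings and attach a "path > heading" breadcrumb to each chunk, so each embedded chunk carries its structural context.

```python
import re


def chunk_by_headings(path: str, text: str) -> list[dict]:
    """Split a note into chunks at markdown headings.

    Each chunk records a context breadcrumb built from the file path and
    the heading it falls under, so the embedding "knows" where the text
    lives in the vault.
    """
    chunks = []
    heading = None
    buf = []

    def flush():
        if buf:
            context = path + (f" > {heading}" if heading else "")
            chunks.append({"context": context, "text": "\n".join(buf).strip()})

    for line in text.splitlines():
        m = re.match(r"^(#+)\s+(.*)", line)
        if m:
            flush()               # close out the previous section
            heading = m.group(2)  # start a new one under this heading
            buf = []
        else:
            buf.append(line)
    flush()  # don't drop the trailing section
    return chunks
```

A note like `intro` text followed by `# A` and `# B` sections would yield three chunks, with contexts `notes/x.md`, `notes/x.md > A`, and `notes/x.md > B` respectively.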