I personally would like it if we could adapt all possible variables, starting where it's simple to implement: things like the system prompt, the number of chat-history messages sent as context, API variables, etc. Maybe also some more creative things, like a simple checkbox next to each chat message for inclusion in the context, or a pane (list) of notes to select (checkbox) for inclusion in the context, or multiple chat windows instead of one unified smart chat window; that would be powerful with the aforementioned features as settings within each chat window.

I think Obsidian users are generally interested in high customizability, and most have some sort of technical background, so you could ship sensible default settings while still offering deep customizability for everyone who wants to do something more specific. I think this is where LLMs as well as Obsidian really shine, and really doubling down on that might be the right approach: high customizability opens up a myriad of use cases. Users could also share their setups and workflows here or in the Obsidian forums, which would be awesome.

I can't code yet, otherwise I'd be helping. Keep up the great work; I think this could easily become the greatest Obsidian plugin of all time! 🐐✨
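A minimal sketch of what such a settings surface could look like, assuming a TypeScript Obsidian plugin; every name here (`ChatSettings` and its fields) is illustrative, not the plugin's actual API:

```typescript
// Illustrative only: none of these names are the plugin's real API. A settings
// shape like this would expose the variables mentioned above in one place.
interface ChatSettings {
  systemPrompt: string;             // prepended to every request
  historyMessagesInContext: number; // how many prior chat turns to include
  model: string;                    // e.g. "gpt-4"
  temperature: number;              // OpenAI sampling parameter
  maxResponseTokens: number;        // tokens reserved for the answer
  includedNotePaths: string[];      // notes explicitly checked for inclusion
}

const DEFAULT_SETTINGS: ChatSettings = {
  systemPrompt: "You are a helpful assistant for my Obsidian vault.",
  historyMessagesInContext: 4,
  model: "gpt-4",
  temperature: 0.7,
  maxResponseTokens: 1024,
  includedNotePaths: [],
};
```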
I have three questions that, at least for me, aren't completely clear:
First, I guess the search functionality purely uses the embeddings and therefore doesn't cost tokens (besides the initial cost to calculate them). In what way does creating new notes trigger new API calls for embeddings? Is it automatic, and can I specify certain intervals for the check? Is there currently even a way to see how many of my notes already have embeddings?
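I don't know the plugin's internals, but a common way to keep embedding costs down is to cache vectors keyed by a content hash, so only new or edited notes trigger API calls. A minimal sketch, with `embedViaApi` as a hypothetical stand-in for the real embeddings request:

```typescript
import { createHash } from "crypto";

// Sketch only (not the plugin's actual code): re-embed a note only when its
// content hash differs from the cached one; unchanged notes cost nothing.
type CacheEntry = { hash: string; vector: number[] };
const embeddingCache = new Map<string, CacheEntry>();

async function embedViaApi(text: string): Promise<number[]> {
  // Stand-in: in reality this would POST the text to the embeddings endpoint.
  return new Array(1536).fill(0);
}

async function getEmbedding(notePath: string, content: string): Promise<number[]> {
  const hash = createHash("sha256").update(content).digest("hex");
  const cached = embeddingCache.get(notePath);
  if (cached && cached.hash === hash) return cached.vector; // free: no API call
  const vector = await embedViaApi(content); // tokens are only spent here
  embeddingCache.set(notePath, { hash, vector });
  return vector;
}
```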
Second, about the chat: I guess it works like this: based on the question there is a search call (performed locally with the help of the embeddings), then the context gets constructed from the search results, and this gets sent via the API to OpenAI. Is that correct in general? If yes, is there a way to fine-tune how the context gets constructed, i.e. how many tokens are used for the context versus the answer? This is especially interesting for GPT-4, whose big benefit is the much larger context window, but so far I see no way to control this.
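To make the question concrete, here is a rough sketch of the trade-off I mean, assuming a fixed context window that must be split between the prompt (question plus retrieved notes) and the reply reserved via `max_tokens`; the 4-characters-per-token ratio is a crude approximation, not a real tokenizer:

```typescript
// All numbers here are illustrative. With a fixed context window, whatever is
// reserved for the answer (max_tokens) is unavailable for retrieved notes.
const CONTEXT_WINDOW = 8192;   // e.g. GPT-4's 8k window
const ANSWER_BUDGET = 1024;    // tokens reserved for the reply (max_tokens)
const PROMPT_BUDGET = CONTEXT_WINDOW - ANSWER_BUDGET;

// Crude token estimate: roughly 4 characters per token for English text.
const approxTokens = (s: string): number => Math.ceil(s.length / 4);

// Greedily pack search results (assumed sorted best-first) into the budget
// left over after the question itself.
function buildContext(searchResults: string[], question: string): string {
  let remaining = PROMPT_BUDGET - approxTokens(question);
  const picked: string[] = [];
  for (const chunk of searchResults) {
    const cost = approxTokens(chunk);
    if (cost > remaining) break;
    picked.push(chunk);
    remaining -= cost;
  }
  return picked.join("\n---\n");
}
```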
Third, this is more of a feature request, I guess. Based on my testing, the chat is currently only context-aware within a single question. That works surprisingly well, but since it presents itself as a chat, my assumption was of course that it is also in some way aware of my previous questions (and the answers), which does not seem to be the case right now. I guess this could be an easy fix (maybe behind a toggle): during the construction of the context, just include parts (or all) of the questions and answers from the chat so far. That way it would feel more natural to chat while still preserving the whole "search through embeddings first and use the results to construct the context/prompt" approach. Again, it would be interesting to have more control over how many tokens are allotted to each part (chat context / search context / answer).
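As a sketch of what I mean (illustrative only; I don't know how the plugin actually assembles its prompt): spend part of the prompt budget on the most recent chat turns and the rest on the embedding-search results, then append the new question:

```typescript
// Illustrative sketch only: allocate the prompt budget across recent chat
// turns and embedding-search context, then append the new question.
type Turn = { role: "user" | "assistant"; content: string };

const approxTokens = (s: string): number => Math.ceil(s.length / 4);

function assemblePrompt(
  history: Turn[],
  searchContext: string,
  question: string,
  historyBudget = 1500, // tokens granted to prior questions/answers
): Turn[] {
  // Walk the history newest-first so the most recent turns survive the
  // budget, then restore chronological order with unshift.
  const keptHistory: Turn[] = [];
  let used = 0;
  for (const turn of [...history].reverse()) {
    const cost = approxTokens(turn.content);
    if (used + cost > historyBudget) break;
    keptHistory.unshift(turn);
    used += cost;
  }
  return [
    { role: "user", content: `Context from my notes:\n${searchContext}` },
    ...keptHistory,
    { role: "user", content: question },
  ];
}
```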