Thank you for the amazing plugin! I just started using it and it blows my mind.
I have an issue when asking questions with Smart Chat. If I understand correctly, the context provided to the LLM is only 4 files, at least when using GPT-4o. I have text related to different topics spread across many more files.
Is there a hard limit of 4 files provided to the LLM? Is it possible to configure the limit?
If not, would it be possible to increase the number of files provided to the LLM? Especially now that newer models support larger context windows and are more affordable.
I just reviewed the code and noticed a bug (fixed in brianpetro/jsbrains@128efcc) that was erroneously limiting the number of returned results to fewer than the arbitrarily imposed limit of 10.
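For illustration, a bug of this shape often comes down to an off-by-one in the slice bound. This is a hypothetical sketch of that class of bug, not the actual jsbrains source (the function and variable names here are invented):

```javascript
// Hypothetical sketch — not the actual jsbrains code.
// An off-by-one in the slice bound returns fewer results than the cap allows.
const RESULT_LIMIT = 10;

function topResultsBuggy(results) {
  // Bug: `RESULT_LIMIT - 1` caps output at 9 items instead of 10.
  return results.slice(0, RESULT_LIMIT - 1);
}

function topResultsFixed(results) {
  // Fixed: slice's end index is exclusive, so the bound is the limit itself.
  return results.slice(0, RESULT_LIMIT);
}

const results = Array.from({ length: 25 }, (_, i) => `note-${i}`);
console.log(topResultsBuggy(results).length); // 9
console.log(topResultsFixed(results).length); // 10
```

The fix is a one-character change, which is why this kind of bug is easy to miss in review.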
The limit will be raised to allow more context to be used. I have a significant overhaul of Smart Chat in the works (no ETA), and this should be one of the improvements included then.
The bugfix was shipped in v2.1.90. Thanks for bringing this to my attention 🌴