Handle LLMs lacking concurrency support in Chat UI #506
Conversation
Actually,
(force-pushed from be54f8f to efd9ffa)
Good job. With this PR, sending a new message before receiving a response to the previous one no longer crashes the app.
# check whether the configured LLM can support a request at this time.
if self.uses_llm and BaseChatHandler._requests_count > 0:
    lm_provider_klass = self.config_manager.lm_provider
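For context, a minimal sketch of how such a guard might continue (an illustration, not the exact code from this PR): the allows_concurrency flag on the provider class and the reply(...) helper are assumptions inferred from the surrounding discussion, and message stands in for the incoming chat request.

# Sketch only: refuse a second request while one is still in flight, if the
# selected provider (e.g. a local GPT4All model) cannot handle concurrency.
if self.uses_llm and BaseChatHandler._requests_count > 0:
    lm_provider_klass = self.config_manager.lm_provider
    # `allows_concurrency` is assumed to be a class-level flag on the provider.
    if lm_provider_klass and not lm_provider_klass.allows_concurrency:
        self.reply(
            "The currently selected language model can process only one request "
            "at a time. Please wait for a reply before sending another message.",
            message,  # hypothetical: the incoming chat message being handled
        )
        return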
Should the variable be named lm_provider_class instead?
It's a common convention to prefer klass over class, since class is a reserved keyword in this language.
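For illustration (a hypothetical standalone snippet, not code from this PR), the restriction only applies to the bare word class, which is why klass or a longer descriptive name gets used instead:

class SomeProvider:  # hypothetical stand-in for an LLM provider type
    pass

# `class` by itself is a reserved keyword, so this assignment is a SyntaxError:
#     class = SomeProvider
# A longer identifier that merely contains the word is perfectly legal:
provider_class = SomeProvider
# `klass` is the conventional spelling when one wants the bare word itself:
klass = SomeProvider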
Nit: it makes sense to use klass in contexts where class is reserved, when trying to differentiate an instance from a type, or when klass is used consistently everywhere in the codebase. Using klass for a single variable where it serves no purpose other than stylistic preference makes the code less readable.
Thank you for handling this!
(force-pushed from 7efb064 to 16e1594)
@meeseeksdev please backport to 1.x
Owee, I'm MrMeeseeks, look at me. There seems to be a conflict; please backport manually. Here are approximate instructions:
And apply the correct labels and milestones. Congratulations, you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon! Remember to remove the
If these instructions are inaccurate, feel free to suggest an improvement.
It looks like the backport is failing because #490 was not backported. I'll do that now.
@meeseeksdev please backport to 1.x
…529) Co-authored-by: david qiu <[email protected]>
* specify no concurrency in GPT4All models
* handle LLMs lacking concurrency support in chat
* add annotation & default for uses_llm attr
* edit reply msg
Fixes #481.
@krassowski Can you help verify this fix? I haven't had good success in running GPT4All locally.
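For reference, a minimal sketch of what "specify no concurrency in GPT4All models" could look like on the provider side; the allows_concurrency attribute name is an assumption carried over from the handler check quoted above, and the classes shown are illustrative rather than the real jupyter-ai providers:

# Hypothetical providers declaring whether they can serve overlapping requests.
class LocalGPT4AllProvider:
    # A local, single-threaded model: the chat handler should refuse a second
    # request while one is still being processed.
    allows_concurrency = False

class HostedAPIProvider:
    # Remote APIs can usually serve parallel requests, so the flag stays True.
    allows_concurrency = True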