community: Allow passing allow_dangerous_deserialization when loading LLM chain #18894
Merged
eyurtsev merged 8 commits into langchain-ai:master from B-Step62:pass-allow-dangerous-deserlization on Mar 26, 2024
Conversation
The new flag was introduced to prevent unsafe model deserialization that relies on pickle without the user's notice. For some LLMs like Databricks, passing this flag as True is necessary to instantiate the model. However, loader functions like `load_llm` don't accept this flag, so there was no way to load those LLMs inside chains.
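For illustration, a minimal usage sketch of loading a saved chain with the flag after this change; the file path and model are hypothetical, not taken from this PR:

```python
from langchain.chains import load_chain

# Hypothetical saved chain whose LLM (e.g. Databricks) requires opting in
# to pickle-based deserialization; the path is illustrative only.
chain = load_chain(
    "databricks_llm_chain.yaml",
    allow_dangerous_deserialization=True,
)
```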
Thank you, I think this makes sense. I'll try to review later today.
eyurtsev
approved these changes
Mar 14, 2024
auto-merge was automatically disabled
March 21, 2024 10:34
Head branch was pushed to by a user without write access
@eyurtsev I've fixed the lint failure, would appreciate it if you could take another look. Thanks!
thanks @B-Step62
gkorland pushed a commit to FalkorDB/langchain that referenced this pull request on Mar 30, 2024
community: Allow passing allow_dangerous_deserialization when loading LLM chain (langchain-ai#18894)
hinthornw pushed a commit that referenced this pull request on Apr 26, 2024
community: Allow passing allow_dangerous_deserialization when loading LLM chain (#18894)
Labels
🤖:improvement
Medium size change to existing code to handle new use-cases
lgtm
PR looks good. Use to confirm that a PR is ready for merging.
Ɑ: models
Related to LLMs or chat model modules
size:L
This PR changes 100-499 lines, ignoring generated files.
Issue
Recently, the new `allow_dangerous_deserialization` flag was introduced to prevent unsafe model deserialization that relies on pickle without the user's notice (#18696). Since then, some LLMs like Databricks require passing this flag as True to instantiate the model. However, this breaks the existing ability to load such LLMs within a chain using the `load_chain` method, because the underlying loader function `load_llm_from_config` (and `load_llm`) ignores keyword arguments passed in.
Solution
This PR fixes the issue by propagating the `allow_dangerous_deserialization` argument to the class loader if and only if the LLM class has that field.
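A minimal sketch of that propagation idea (not the exact merged code; the function name is hypothetical and it assumes a pydantic-v1-style LLM class exposing `__fields__`):

```python
def _load_llm_with_flag(llm_cls, config: dict, **kwargs):
    # Forward allow_dangerous_deserialization only when the target LLM
    # class actually declares it as a field; otherwise drop it so LLMs
    # without the field are constructed exactly as before.
    load_kwargs = {}
    if "allow_dangerous_deserialization" in getattr(llm_cls, "__fields__", {}):
        load_kwargs["allow_dangerous_deserialization"] = kwargs.get(
            "allow_dangerous_deserialization", False
        )
    return llm_cls(**config, **load_kwargs)
```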