
community: Allow passing allow_dangerous_deserialization when loading LLM chain #18894

Merged

Conversation

B-Step62
Contributor

Issue

Recently, the new `allow_dangerous_deserialization` flag was introduced to prevent unsafe model deserialization that relies on pickle without the user's notice (#18696). Since then, some LLMs such as Databricks require passing this flag as `True` to instantiate the model.

However, this breaks the existing functionality of loading such LLMs within a chain via the `load_chain` method, because the underlying loader function `load_llm_from_config` (and `load_llm`) ignores keyword arguments passed in.
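For illustration, a minimal reproduction of the failure mode (the config path is hypothetical, and the exact error text may differ):

```python
from langchain.chains import load_chain

# A saved chain whose LLM section has `_type: databricks` (hypothetical path).
# The Databricks LLM requires allow_dangerous_deserialization=True, but
# load_llm_from_config dropped any keyword arguments, so this raised a
# ValueError with no way to opt in:
chain = load_chain("databricks_chain.yaml", allow_dangerous_deserialization=True)
```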

Solution

This PR fixes the issue by propagating the `allow_dangerous_deserialization` argument to the class loader, if and only if the LLM class declares that field.
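Conceptually, the loader change looks like the following sketch (simplified and paraphrased from `load_llm_from_config`, not the exact merged code; error handling for missing or unknown `_type` values is omitted):

```python
from typing import Any

from langchain_community.llms import get_type_to_cls_dict
from langchain_core.language_models.llms import BaseLLM


def load_llm_from_config(config: dict, **kwargs: Any) -> BaseLLM:
    """Sketch: instantiate an LLM from a config dict, forwarding the flag."""
    config_type = config.pop("_type")
    llm_cls = get_type_to_cls_dict()[config_type]()  # registry maps type name -> class factory

    # Forward allow_dangerous_deserialization only if the LLM class
    # actually declares that field (e.g. Databricks); others are untouched.
    load_kwargs = {}
    if "allow_dangerous_deserialization" in llm_cls.__fields__:
        load_kwargs["allow_dangerous_deserialization"] = kwargs.get(
            "allow_dangerous_deserialization", False
        )

    return llm_cls(**config, **load_kwargs)
```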

The new flag was introduced to prevent unsafe model deserialization
that relies on pickle without the user's notice. For some LLMs, such as Databricks,
passing this flag as `True` is necessary to instantiate the model.
However, loader functions like `load_llm` don't accept this flag, so there
was no way to load those LLMs inside chains.
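With this change, the same call from the earlier reproduction goes through: the flag can be passed straight to the chain loader (a sketch; the file path is hypothetical):

```python
from langchain.chains import load_chain

# The kwarg now flows load_chain -> load_llm_from_config -> the LLM class:
chain = load_chain(
    "databricks_chain.yaml",  # hypothetical saved chain config
    allow_dangerous_deserialization=True,
)
```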
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Mar 11, 2024

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:improvement Medium size change to existing code to handle new use-cases labels Mar 11, 2024
@eyurtsev eyurtsev self-assigned this Mar 11, 2024
@eyurtsev
Collaborator

Thank you, I think this makes sense. I'll try to review it later today.

@eyurtsev eyurtsev self-requested a review March 14, 2024 01:41
@dosubot dosubot bot added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label Mar 14, 2024
@eyurtsev eyurtsev enabled auto-merge (squash) March 14, 2024 01:59
@eyurtsev eyurtsev disabled auto-merge March 19, 2024 13:41
@eyurtsev eyurtsev enabled auto-merge (squash) March 20, 2024 13:55
@eyurtsev eyurtsev disabled auto-merge March 20, 2024 13:55
@eyurtsev eyurtsev enabled auto-merge (squash) March 20, 2024 13:55
auto-merge was automatically disabled March 21, 2024 10:34

Head branch was pushed to by a user without write access

@B-Step62
Contributor Author

@eyurtsev I've fixed the lint failure; I'd appreciate it if you could take another look. Thanks!

@eyurtsev eyurtsev merged commit cfecbda into langchain-ai:master Mar 26, 2024
59 checks passed
@eyurtsev
Collaborator

thanks @B-Step62

gkorland pushed a commit to FalkorDB/langchain that referenced this pull request Mar 30, 2024
hinthornw pushed a commit that referenced this pull request Apr 26, 2024