
core[patch]: Fix llm string representation for serializable models #23416

Merged

eyurtsev merged 5 commits into master on Jul 1, 2024

Conversation

eyurtsev (Collaborator) commented Jun 25, 2024

Fix LLM string representation for serializable objects.

Fix for issue: #23257

The llm string of serializable chat models is the serialized representation of the object. LangChain serialization dumps some basic information about non-serializable objects, including their repr(), which contains an object id.

This means that if a chat model has any non-serializable fields (e.g., a cache), then any new instantiation of those fields will change the llm string representation of the chat model and cause cache misses.

i.e., re-instantiating a Postgres cache would result in cache misses!
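The failure mode can be sketched in a few lines (the names here are illustrative, not langchain-core's actual API): a field serialized via repr() embeds the object id, so every new instance yields a different llm string and therefore a different cache key.

```python
class FakeCache:
    """Stands in for a non-serializable field such as a cache backend."""


def llm_string_with_repr(cache: FakeCache) -> str:
    # Buggy behavior: the non-serializable field is dumped via repr(),
    # which includes the per-instance object id.
    return f"chat_model(cache={cache!r})"


def llm_string_fixed(cache: FakeCache) -> str:
    # Fixed behavior: exclude the non-serializable field from the key,
    # so the string is stable across instantiations.
    return "chat_model(cache=<non-serializable>)"


# Keep both instances alive so their ids are guaranteed to differ.
c1, c2 = FakeCache(), FakeCache()
assert llm_string_with_repr(c1) != llm_string_with_repr(c2)  # cache miss
assert llm_string_fixed(c1) == llm_string_fixed(c2)          # stable key
```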


@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Jun 25, 2024
@eyurtsev (Collaborator, Author) commented: #23257

@dosubot dosubot bot added Ɑ: core Related to langchain-core 🤖:improvement Medium size change to existing code to handle new use-cases labels Jun 25, 2024
@eyurtsev eyurtsev requested a review from baskaryan June 27, 2024 21:23
@dosubot dosubot bot added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label Jul 1, 2024
@eyurtsev eyurtsev merged commit b5aef4c into master Jul 1, 2024
134 checks passed
@eyurtsev eyurtsev deleted the eugene/fix_cache_llm_string branch July 1, 2024 18:06
eyurtsev added a commit that referenced this pull request Jul 3, 2024
… that is used as a key for caching chat models responses (#23842)

This PR should fix the following issue:
#23824
Introduced as part of this PR:
#23416

I am unable to reproduce the issue locally, though it's clear that we're somehow getting a `serialized` object that is not a dictionary.
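A minimal sketch (hypothetical helper, not the actual langchain-core code) of the kind of defensive key derivation such a fix typically needs when `serialized` may not be a dict:

```python
import json


def llm_cache_key(serialized: object) -> str:
    # Defensive: `serialized` is expected to be a dict, but some code
    # paths may hand us something else; fall back to str() in that case.
    if isinstance(serialized, dict):
        # Keep only JSON-serializable values, dropping live objects
        # (whose repr() would embed an unstable object id), so the key
        # is stable across processes and re-instantiations.
        stable = {
            k: v
            for k, v in serialized.items()
            if isinstance(v, (str, int, float, bool, list, dict, type(None)))
        }
        return json.dumps(stable, sort_keys=True)
    return str(serialized)


# Two dicts that differ only in a non-serializable field hash to the
# same key:
assert llm_cache_key({"model": "gpt", "cache": object()}) == llm_cache_key(
    {"model": "gpt", "cache": object()}
)
```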

The test below passes for me prior to the PR as well.

```python

def test_cache_with_sqlite() -> None:
    from langchain_community.cache import SQLiteCache

    from langchain_core.globals import set_llm_cache

    cache = SQLiteCache(database_path=".langchain.db")
    set_llm_cache(cache)
    chat_model = FakeListChatModel(responses=["hello", "goodbye"], cache=True)
    assert chat_model.invoke("How are you?").content == "hello"
    assert chat_model.invoke("How are you?").content == "hello"
```