BUG: Many chat models never use SQLiteCache because the cache instance's __repr__ changes! #23257
Comments
Re-opened after reading a bit more carefully. @thiswillbeyourgithub if you're sharing a minimal example in the future, it's much better to share the example itself, and then any utility code to identify more cases. The utility code uses a bunch of functionality (e.g., …)
I'm writing a minimal example right now
Hi @thiswillbeyourgithub, thanks! I confirmed locally already, so we're all set :)

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.caches import InMemoryCache

cache = InMemoryCache()
model = ChatAnthropic(cache=InMemoryCache(), model_name='hello')
model._get_llm_string()
```
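The mechanism behind the reproduction above can be shown with the standard library alone. This is a minimal sketch, assuming a simplified key-building scheme; `FakeCache` and `llm_string` are hypothetical stand-ins, not LangChain APIs. The point is that Python's default `__repr__` embeds the object's memory address, so two otherwise identical cache instances never serialize to the same string:

```python
# FakeCache and llm_string are hypothetical stand-ins for illustration.

class FakeCache:
    """Stand-in for a cache with no custom __repr__ and no serializer."""

def llm_string(model_params: dict, cache: object) -> str:
    # Simplified stand-in for _get_llm_string(): dump the params plus the
    # repr() of the non-serializable cache field.
    return f"params={sorted(model_params.items())}|cache={cache!r}"

# Keep both instances alive so their memory addresses are guaranteed distinct.
cache_a, cache_b = FakeCache(), FakeCache()
key_a = llm_string({"model": "hello"}, cache_a)
key_b = llm_string({"model": "hello"}, cache_b)
print(key_a == key_b)  # False: each repr() embeds a different object id
```

Because the cache lookup is keyed on this string, every new process (or even a re-instantiated cache object) produces a fresh key and therefore a guaranteed cache miss.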
Alright, sorry for the extensive reproduction, but at first I showed this on ChatOpenAI, then noticed that the issue was so extensive (affecting at least 7 chat models) and is not only related to the cache but sometimes to other attributes as well, for example for Anthropic as you saw.
So my original code is a bit awkward, but it allows you to quickly see a lower bound of which attributes of which models pose a problem.
The issue is here:
Likely affected by any other helper objects (e.g., client)
…23416) Fix LLM string representation for serializable objects. Fix for issue: #23257. The llm string of serializable chat models is the serialized representation of the object. LangChain serialization dumps some basic information about non-serializable objects, including their repr(), which includes an object id. This means that if a chat model has any non-serializable fields (e.g., a cache), then any new instantiation of those fields will change the llm representation of the chat model and cause cache misses. i.e., re-instantiating a postgres cache would result in cache misses!
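One way to make such a key stable is sketched below. This is an illustration of the general idea, not the exact merged fix: instead of embedding the instance-specific `repr()` of a non-serializable field, use a type-based placeholder that is identical across instances. `FakeCache` and `stable_llm_string` are hypothetical names.

```python
# Hypothetical sketch: stabilize the key by replacing repr() with a
# type-based token that does not change between instantiations.

class FakeCache:
    """Hypothetical non-serializable field (e.g., a cache)."""

def stable_llm_string(model_params: dict, cache: object) -> str:
    # The fully qualified class name is identical across instances,
    # unlike repr(), which embeds the object id.
    cache_token = f"{type(cache).__module__}.{type(cache).__qualname__}"
    return f"params={sorted(model_params.items())}|cache={cache_token}"

print(stable_llm_string({"model": "hello"}, FakeCache())
      == stable_llm_string({"model": "hello"}, FakeCache()))  # True
```

With a stable token, re-instantiating the cache (e.g., a new postgres cache connection in a new process) produces the same key and the cache lookup succeeds.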
This is a critical bug, don't you think, @eyurtsev?
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
Being affected by this bug in my DocToolsLLM project, I ended up using ChatOpenAI directly (instead of ChatLiteLLM for all models) whenever the requested model is from OpenAI anyway.
The other day I noticed that my SQLiteCache was being systematically ignored, but only by ChatOpenAI, and I eventually tracked down the culprit: the output of
_get_llm_string()
contains
<langchain_community.cache.SQLiteCache object at SOME_ADDRESS>
To help you fix this ASAP, I wrote a loop that checks all chat models and reports which instance is causing the issue.
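A detector in the spirit of that checking loop can be sketched with the standard library alone. This is an assumption-laden illustration (the helper name `has_unstable_key` is hypothetical): it simply flags any llm string that embeds a default `repr()` of the form `<... object at 0x...>`, since such a key changes on every instantiation and guarantees cache misses.

```python
import re

# Matches the default object.__repr__ form, e.g.
# "<langchain_community.cache.SQLiteCache object at 0x7f3a2c1d9e80>".
DEFAULT_REPR = re.compile(r"<[\w.]+ object at 0x[0-9a-fA-F]+>")

def has_unstable_key(llm_str: str) -> bool:
    """Return True if the llm string embeds an instance-specific repr()."""
    return bool(DEFAULT_REPR.search(llm_str))

print(has_unstable_key(
    "<langchain_community.cache.SQLiteCache object at 0x7f3a2c1d9e80>"))  # True
print(has_unstable_key('{"model_name": "gpt-4", "cache": "SQLiteCache"}'))  # False
```

Running a check like this over the llm string of each chat model class gives a lower bound on which models (and which of their attributes) are affected.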
System Info
python -m langchain_core.sys_info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: