GPTCache keep creating new gptcache cache_obj #4830

Closed
1 of 14 tasks
elBarkey opened this issue May 17, 2023 · 0 comments · Fixed by #4827
elBarkey commented May 17, 2023

System Info

Langchain Version: 0.0.170
Platform: Linux X86_64
Python: 3.9

Who can help?

@SimFG


Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

Steps to reproduce the behaviour:

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache

import langchain
from langchain.cache import GPTCache
from langchain.llms import OpenAI

# Avoid multiple caches using the same file, causing different llm model caches to affect each other
def init_gptcache(cache_obj: Cache, llm: str):
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")

langchain.llm_cache = GPTCache(init_gptcache)

llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm("tell me a joke")

# llm_string is the serialized LLM configuration that langchain uses as part of the cache key
print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string))
# cached: None

The cache never hits.
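
One way to observe the behaviour described in the title (this is a sketch added for illustration, not part of the original report) is to count how many times langchain invokes the init function for the same model; if a fresh cache_obj is created on every request, the counter keeps climbing instead of staying at one per llm string:

# Sketch: count init_gptcache invocations to see whether the cache_obj is reused
call_count = 0

def counting_init_gptcache(cache_obj: Cache, llm: str):
    global call_count
    call_count += 1
    print(f"init_gptcache call #{call_count} for llm={llm!r}")
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")

langchain.llm_cache = GPTCache(counting_init_gptcache)
llm("tell me a joke")
llm("tell me a joke")  # expected: no extra init call for the same model; observed: init runs again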

Expected behavior

GPTCache should return a hit for the repeated prompt.
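
For reference, a minimal sketch of the expected flow (the cache key, llm_string, is built internally by langchain from the model parameters, so no manual lookup is needed here):

llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
first = llm("tell me a joke")   # cache miss: calls the API and stores the result
second = llm("tell me a joke")  # expected cache hit: served from the GPTCache store
assert first == second          # should hold when the second call is answered from the cache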
