Add Support for Memcached as a LLM Model Cache #27275
Closed
Comments
Thank you for creating an issue for us! We should have a PR created in a week or two. Could this issue be assigned to me?

@efriis We just created a PR for this integration, would appreciate if you could take a look!
yanomaly pushed a commit to yanomaly/langchain that referenced this issue on Nov 8, 2024:
## Description

This PR adds support for Memcached as a usable LLM model cache by adding the `MemcachedCache` implementation relying on the [pymemcache](https://github.com/pinterest/pymemcache) client.

Unit test-wise, the new integration is generally covered under existing import testing. All new functionality depends on pymemcache if instantiated and used, so to comply with the other cache implementations the PR also adds optional integration tests for `MemcachedCache`.

Since this is a new integration, documentation is added for Memcached as an integration and as an LLM Cache.

## Issue

This PR closes langchain-ai#27275, which was originally raised as a discussion in langchain-ai#27035.

## Dependencies

There are no new required dependencies for langchain, but [pymemcache](https://github.com/pinterest/pymemcache) is required to instantiate the new `MemcachedCache`.

## Example Usage

```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client('localhost')))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```

---------

Co-authored-by: Erick Friis <[email protected]>
Discussed in #27035
Originally posted by prokopchukdim October 1, 2024
Feature request
We would like to add support for Memcached as a usable LLM model cache. There are two main pure-Python Memcached clients: pymemcache and python-memcached.
We would primarily like to add support for pymemcache, since it is the more actively maintained of the two, but it may be possible to support both clients under one newly added cache class, since both are widely used.
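For context, the reason a single cache class could plausibly wrap either client is that both speak the memcached text protocol and expose a similar `get`/`set` surface. A minimal illustration (not from the original post), assuming a Memcached server is reachable at `127.0.0.1:11211`:

```python3
# Illustrative only: both clients expose a compatible get/set surface,
# so one cache class could potentially wrap either.

# pymemcache (the primary target of this proposal)
from pymemcache.client.base import Client as PymemcacheClient

pmc = PymemcacheClient(("127.0.0.1", 11211))
pmc.set("greeting", "hello")
print(pmc.get("greeting"))  # b"hello" (pymemcache returns bytes by default)

# python-memcached (possible secondary target)
import memcache

mc = memcache.Client(["127.0.0.1:11211"])
mc.set("greeting", "hello")
print(mc.get("greeting"))  # "hello"
```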
Motivation
Many of the natively supported model caches are full-fledged databases. While Redis is supported as an option for distributed in-memory storage, many teams and companies rely on Memcached as their distributed in-memory cache. By adding Memcached support, we hope to make the model caching feature useful to more of the teams using LangChain.
Example Usage
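A minimal usage sketch, mirroring the example from the PR description above and assuming pymemcache is installed and a Memcached server is running locally:

```python3
from langchain.globals import set_llm_cache
from langchain_community.cache import MemcachedCache
from langchain_openai import OpenAI
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)

# Cache LLM generations in a local Memcached instance
set_llm_cache(MemcachedCache(Client("localhost")))

# First call: not cached yet, so it hits the model
llm.invoke("Which city is the most crowded city in the USA?")

# Second call: served from the Memcached cache
llm.invoke("Which city is the most crowded city in the USA?")
```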
Proposal (If applicable)
We intend to add a new `MemcachedCache` implementation in `libs/community/langchain_community/cache.py` to support the `pymemcache` client. If there is interest in also supporting the `python-memcached` client, or others, we can explore creating a unified implementation class, since all clients should generally adhere to the memcached text protocol. We intend to submit a pull request some time in October, and no later than mid-November.
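For illustration only (this is not the merged implementation), a rough sketch of what such a cache class might look like, assuming the `BaseCache` interface from `langchain_core` (`lookup`/`update`/`clear`) and pickle-based serialization of cached generations; the actual key construction and serialization could well differ:

```python3
import hashlib
import pickle
from typing import Any, Optional, Sequence

from langchain_core.caches import BaseCache
from langchain_core.outputs import Generation
from pymemcache.client.base import Client


class MemcachedCacheSketch(BaseCache):
    """Hypothetical Memcached-backed LLM cache (illustrative, not the real class)."""

    def __init__(self, client: Client) -> None:
        self.client = client

    @staticmethod
    def _key(prompt: str, llm_string: str) -> str:
        # Memcached keys must be short and whitespace-free, so hash the pair.
        return hashlib.sha256(f"{prompt}|{llm_string}".encode()).hexdigest()

    def lookup(self, prompt: str, llm_string: str) -> Optional[Sequence[Generation]]:
        raw = self.client.get(self._key(prompt, llm_string))
        return pickle.loads(raw) if raw is not None else None

    def update(
        self, prompt: str, llm_string: str, return_val: Sequence[Generation]
    ) -> None:
        self.client.set(self._key(prompt, llm_string), pickle.dumps(return_val))

    def clear(self, **kwargs: Any) -> None:
        # Flushes the whole Memcached instance, not just LLM cache entries.
        self.client.flush_all()
```

This only shows how the standard cache hooks could map onto pymemcache's `get`/`set`; see the merged PR for the real implementation.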