
community: Memcached LLM Cache Integration #27323

Merged · 10 commits · Nov 7, 2024

Conversation

@prokopchukdim (Contributor) commented Oct 13, 2024

Description

This PR adds support for Memcached as an LLM cache by adding the MemcachedCache implementation, which relies on the pymemcache client.
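For reference, a minimal sketch of what a pymemcache-backed LLM cache can look like (the class name SimpleMemcachedCache and the serialization details are illustrative, not the merged implementation):

```python3
import hashlib
from typing import Any, Optional

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache
from langchain_core.load.dump import dumps
from langchain_core.load.load import loads


class SimpleMemcachedCache(BaseCache):
    """Illustrative Memcached-backed LLM cache (not the merged code)."""

    def __init__(self, client: Any) -> None:
        # `client` is assumed to be a pymemcache Client-like object
        # exposing get/set/flush_all.
        self.client = client

    def _key(self, prompt: str, llm_string: str) -> str:
        # Memcached keys must be short and whitespace-free,
        # so hash the (prompt, llm_string) pair.
        return hashlib.sha256(f"{prompt}{llm_string}".encode()).hexdigest()

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        value = self.client.get(self._key(prompt, llm_string))
        return loads(value.decode("utf-8")) if value is not None else None

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        serialized = dumps(list(return_val))
        self.client.set(self._key(prompt, llm_string), serialized.encode("utf-8"))

    def clear(self, **kwargs: Any) -> None:
        # flush_all drops every key on the server, not just LLM cache entries.
        self.client.flush_all()
```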

On the unit-test side, the new integration is largely covered by the existing import tests. All new functionality depends on pymemcache only when the cache is instantiated and used, so, consistent with the other cache implementations, the PR also adds optional integration tests for MemcachedCache (see the sketch below).
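These optional tests require a live memcached server. A sketch of the kind of round-trip check involved (the test name and assertions are illustrative, assuming memcached is reachable at localhost on the default port 11211):

```python3
import pytest

# Skip the whole module when the optional dependency is absent.
pymemcache = pytest.importorskip("pymemcache")

from langchain_core.outputs import Generation


def test_memcached_cache_roundtrip() -> None:
    from pymemcache.client.base import Client

    from langchain_community.cache import MemcachedCache

    cache = MemcachedCache(Client("localhost"))
    cache.update("prompt", "llm_string", [Generation(text="cached answer")])
    assert cache.lookup("prompt", "llm_string") == [Generation(text="cached answer")]
    cache.clear()
```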

Since this is a new integration, documentation is added for Memcached both as an integration and as an LLM cache.

Issue

This PR closes #27275, which was originally raised as a discussion in #27035.

Dependencies

There are no new required dependencies for langchain, but pymemcache is required to instantiate the new MemcachedCache.
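Because the dependency is optional, the implementation can guard the import at construction time, along these lines (the exact error message wording is an assumption):

```python3
try:
    from pymemcache.client.base import Client  # noqa: F401
except ImportError:
    raise ImportError(
        "Could not import pymemcache. "
        "Please install it with `pip install pymemcache`."
    )
```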

Example Usage

```python3
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI

from langchain_community.cache import MemcachedCache
from pymemcache.client.base import Client

llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
set_llm_cache(MemcachedCache(Client("localhost")))

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Which city is the most crowded city in the USA?")

# The second time it is, so it goes faster
llm.invoke("Which city is the most crowded city in the USA?")
```


@dosubot added the size:L (This PR changes 100-499 lines, ignoring generated files) and community (Related to langchain-community) labels Oct 13, 2024
@efriis self-assigned this Oct 16, 2024
@dosubot added the lgtm (PR looks good; ready for merging) label Nov 7, 2024
@efriis enabled auto-merge (squash) Nov 7, 2024
@efriis efriis merged commit 53b0a99 into langchain-ai:master Nov 7, 2024
20 checks passed
yanomaly pushed a commit to yanomaly/langchain that referenced this pull request Nov 8, 2024