core+partners/anthropic: Anthropic prompt caching #25644
Conversation
@efriis oof, main is running ahead real fast. My changes passed tests/lint before I decided to bump up with "Update branch". Would you mind taking a look and helping me figure out what the next steps are, or how I can improve my PR to get it merged?
thanks for the contribution! i'm very down for including cache token usage in ChatAnthropic outputs but think we'll want to make sure we do it in a future-proof/generalizable way
@@ -51,6 +51,10 @@ class UsageMetadata(TypedDict):
     """Count of output (or completion) tokens."""
     total_tokens: int
     """Total token count."""
+    cache_creation_input_tokens: NotRequired[int]
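For context, a minimal sketch of what this diff amounts to (the surrounding class is simplified here; only the field names come from the diff):

from typing_extensions import NotRequired, TypedDict

class UsageMetadata(TypedDict):
    input_tokens: int
    """Count of input (or prompt) tokens."""
    output_tokens: int
    """Count of output (or completion) tokens."""
    total_tokens: int
    """Total token count."""
    # proposed: NotRequired keeps the key optional, so providers
    # without prompt caching are unaffected
    cache_creation_input_tokens: NotRequired[int]

# NotRequired means the key may simply be absent:
usage: UsageMetadata = {"input_tokens": 8, "output_tokens": 12, "total_tokens": 20}
usage["cache_creation_input_tokens"] = 1500  # only set when the provider reports it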
i don't think we want to add this to core until at least one or two other providers support a similar feature
@property
def _messages_client(self) -> Messages:
    if self.beta:
        return self._client.beta.prompt_caching.messages  # type: ignore[attr-defined]
this feels more specific than what a plain "beta" flag suggests. are we going to update the client to beta.{x}.messages every time there's a new beta feature?
also, is cache usage not returned if you use the regular client with the beta headers?
Oh, nice, it actually works. Here's an example:
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

model = ChatAnthropic(
    model="claude-3-opus-20240229",
    temperature=0,
    # opt in to the prompt-caching beta via headers on the regular client
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)

chat = [
    SystemMessage([{
        "type": "text",
        "text": "foo" * 1000,  # large static block, worth caching
        "cache_control": {"type": "ephemeral"},  # mark the prefix for caching
    }]),
    HumanMessage("Hi"),
]

model.invoke(chat)
returning
AIMessage(content='Hello! How can I assist you today?', response_metadata={'id': 'msg_01EuihUPN9JrbzZXuZd6oEu8', 'model': 'claude-3-opus-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 8, 'output_tokens': 12, 'cache_creation_input_tokens': 1500, 'cache_read_input_tokens': 0}}, id='run-a13ecd02-d669-4028-b8a2-56e5113d2417-0', usage_metadata={'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20})
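Note the cache counters surface in response_metadata["usage"] but not in usage_metadata. A quick sketch of reading them back out (key names as in the output above):

response = model.invoke(chat)
usage = response.response_metadata["usage"]
print(usage.get("cache_creation_input_tokens"))  # e.g. 1500 when the cache is written
print(usage.get("cache_read_input_tokens"))      # nonzero on a subsequent cache hit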
Is there a way to include variables in the system prompt but still include the "cache_control": {"type": "ephemeral"} parameter?
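One pattern that should work (a sketch, not from this thread; build_chat and user_name are illustrative): put the static portion in its own text block marked with cache_control, and append the variable portion as a separate, unmarked text block after it. Anthropic caches the prefix up to and including the marked block, so the variable block and later messages can change without invalidating the cache:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

model = ChatAnthropic(
    model="claude-3-opus-20240229",
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)

def build_chat(user_name: str) -> list:
    # illustrative helper, not part of the PR
    return [
        SystemMessage([
            {
                "type": "text",
                "text": "foo" * 1000,  # static instructions; cached prefix ends here
                "cache_control": {"type": "ephemeral"},
            },
            {
                "type": "text",
                "text": f"The user's name is {user_name}.",  # variable part, not cached
            },
        ]),
        HumanMessage("Hi"),
    ]

model.invoke(build_chat("Ada"))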
Description: Added support for Anthropic prompt caching, see #25625
Issue: the issue # it fixes, if applicable
Dependencies: bump anthropic>=0.34.0
will fix it and add a usage example to the notebook; so far here: