Harrison/standard llm interface #4615
Conversation
langchain/base_language.py (outdated)

```diff
@@ -51,6 +51,16 @@ async def agenerate_prompt(
 ) -> LLMResult:
     """Take in a list of prompt values and return an LLMResult."""

+    @abstractmethod
+    def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
```
- Use `*` after positional args
- Change `List` to `Sequence`
- The default should be to get rid of all `List` types on all inputs
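The signature change requested in this review (keyword-only `stop`, `Sequence` in place of `List`) could look like the following sketch. `BaseLanguageModel` here is a minimal stand-in for illustration, not the full LangChain class:

```python
from abc import ABC, abstractmethod
from typing import Optional, Sequence


class BaseLanguageModel(ABC):
    """Minimal stand-in illustrating the reviewed signature only."""

    @abstractmethod
    def predict(self, text: str, *, stop: Optional[Sequence[str]] = None) -> str:
        """The bare `*` makes `stop` keyword-only; `Sequence` accepts
        lists, tuples, or any other sequence type from callers."""
```

With this shape, callers must write `model.predict("hi", stop=("\n",))`; passing `stop` positionally raises a `TypeError`, which keeps the positional part of the API minimal and future-proof.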
langchain/base_language.py (outdated)

```diff
+    @abstractmethod
+    def predict_messages(
+        self, messages: List[BaseMessage], stop: Optional[List[str]] = None
```
- Use `*` after positional args
- Change `List` to `Sequence` in input
langchain/chat_models/base.py (outdated)

```
result = self([HumanMessage(content=message)], stop=stop)
return self.predict(message, stop=stop)

def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
```
Same comment as above for function signature
# Add async versions of predict() and predict_messages()

#4615 introduced a unifying interface for "base" and "chat" LLM models via the new `predict()` and `predict_messages()` methods, which allow both types of models to operate on string and message-based inputs, respectively. This PR adds async versions of the same (`apredict()` and `apredict_messages()`) that are identical except for their use of `agenerate()` in place of `generate()`, which means they reuse all existing work on the async backend.

## Who can review?

Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:

@hwchase17 (follows his work on #4615)
@agola11 (async)

---------

Co-authored-by: Harrison Chase <[email protected]>
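The sync/async pairing described above can be sketched as follows. `SimpleLLM` and its `_call`/`_acall` helpers are hypothetical stand-ins for the real `generate()`/`agenerate()` backend, used only to show how `apredict()` mirrors `predict()`:

```python
import asyncio
from typing import List, Optional


class SimpleLLM:
    """Toy model: the sync/async pair mirrors predict()/apredict()."""

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Stand-in for generate(): reverse the prompt as a fake completion.
        return prompt[::-1]

    async def _acall(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        # Stand-in for agenerate(): same result, but awaitable.
        return self._call(prompt, stop)

    def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
        return self._call(text, stop)

    async def apredict(self, text: str, stop: Optional[List[str]] = None) -> str:
        # Identical to predict() except it awaits the async backend.
        return await self._acall(text, stop)


print(asyncio.run(SimpleLLM().apredict("hello")))  # olleh
```

Because `apredict()` only swaps the backend call for its awaitable twin, any existing async plumbing (event loops, concurrent gather calls) works unchanged.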