
Harrison/standard llm interface #4615

Merged 3 commits from harrison/standard-llm-interface into master on May 13, 2023.
Conversation

hwchase17 (Contributor):

No description provided.

@@ -51,6 +51,16 @@ async def agenerate_prompt(
) -> LLMResult:
"""Take in a list of prompt values and return an LLMResult."""

@abstractmethod
def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
Collaborator:

  • Use `*` after positional args
  • Change list to sequence

The default should be to get rid of all `List` types on all inputs.
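A minimal sketch of the signature change the review suggests, using a hypothetical `BaseLanguageModel` stand-in (not the library's actual class): `*` makes `stop` keyword-only, and `Sequence` accepts tuples and other sequence types, not just lists.

```python
from abc import ABC, abstractmethod
from typing import Optional, Sequence


class BaseLanguageModel(ABC):
    # Hypothetical stand-in: `*` forces `stop` to be passed by keyword,
    # and `Sequence[str]` accepts lists, tuples, etc. on input.
    @abstractmethod
    def predict(self, text: str, *, stop: Optional[Sequence[str]] = None) -> str:
        """Predict a completion for the given text."""
```

With this signature, `model.predict("hi", ["\n"])` raises a `TypeError`; callers must write `model.predict("hi", stop=["\n"])`.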


@abstractmethod
def predict_messages(
self, messages: List[BaseMessage], stop: Optional[List[str]] = None
@eyurtsev (Collaborator), May 13, 2023:

  • Use * after positional args
  • Change list to sequence in input

result = self([HumanMessage(content=message)], stop=stop)
return self.predict(message, stop=stop)

def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
Collaborator:

Same comment as above for the function signature.
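The delegation pattern visible in the diff context above can be sketched with a toy chat model (every name here is a hypothetical stand-in, not the library's API): the string-based `predict()` wraps the text in a `HumanMessage` and delegates to the message-based call.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class HumanMessage:
    # Hypothetical stand-in for a chat message carrying user text.
    content: str


class ToyChatModel:
    def __call__(self, messages: List[HumanMessage], stop: Optional[List[str]] = None) -> HumanMessage:
        # Stand-in for a real chat-model invocation: echoes the last
        # message's content reversed, so the round-trip is observable.
        return HumanMessage(content=messages[-1].content[::-1])

    def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
        # String in, string out: wrap the text as a message, call the
        # message-based interface, and unwrap the result.
        result = self([HumanMessage(content=text)], stop=stop)
        return result.content
```

This is what lets chat models satisfy the same string-based `predict()` contract as base LLMs.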

@hwchase17 hwchase17 merged commit 6265cbf into master May 13, 2023
@hwchase17 hwchase17 deleted the harrison/standard-llm-interface branch May 13, 2023 16:05
hwchase17 added a commit that referenced this pull request May 24, 2023
# Add async versions of predict() and predict_messages()

#4615 introduced a unifying interface for "base" and "chat" LLM models
via the new `predict()` and `predict_messages()` methods that allow both
types of models to operate on string and message-based inputs,
respectively.

This PR adds async versions of the same (`apredict()` and
`apredict_messages()`) that are identical except for their use of
`agenerate()` in place of `generate()`, which means they repurpose all
existing work on the async backend.


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
        @hwchase17 (follows his work on #4615)
        @agola11 (async)

---------

Co-authored-by: Harrison Chase <[email protected]>
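The sync/async mirroring the commit message describes can be sketched with a toy model (hypothetical names throughout, under the stated assumption that `apredict()` differs from `predict()` only in awaiting `agenerate()` instead of calling `generate()`):

```python
import asyncio
from typing import List, Optional


class ToyLLM:
    def generate(self, prompts: List[str], stop: Optional[List[str]] = None) -> List[str]:
        # Stand-in for a real synchronous LLM call.
        return [p.upper() for p in prompts]

    async def agenerate(self, prompts: List[str], stop: Optional[List[str]] = None) -> List[str]:
        # Stand-in for a real asynchronous LLM call.
        return [p.upper() for p in prompts]

    def predict(self, text: str, stop: Optional[List[str]] = None) -> str:
        return self.generate([text], stop=stop)[0]

    async def apredict(self, text: str, stop: Optional[List[str]] = None) -> str:
        # Identical to predict() except it awaits the async backend,
        # reusing all existing async machinery.
        results = await self.agenerate([text], stop=stop)
        return results[0]
```

Usage: `asyncio.run(model.apredict("hi"))` returns the same result as `model.predict("hi")`, but without blocking the event loop.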
vowelparrot pushed a commit that referenced this pull request May 24, 2023
2 participants