
openai[patch]: default to invoke on o1 stream() #27983

Merged
merged 1 commit into from
Nov 9, 2024

Conversation

baskaryan
Collaborator

No description provided.

@baskaryan baskaryan requested review from efriis and ccurme November 8, 2024 09:10
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Nov 8, 2024

) -> bool:
if isinstance(response_format, type) and is_basemodel_subclass(response_format):
# TODO: Add support for streaming with Pydantic response_format.
warnings.warn("Streaming with Pydantic response_format not yet supported.")
Collaborator

Should we also emit a warning in the case of o1, for clarity about why results aren't streamed?

Collaborator Author

Maybe. It just seems like the model is something you're more likely to let end users configure (e.g. the way the LangSmith playground does), in which case warnings would be annoying and unhelpful.

@dosubot dosubot bot added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label Nov 8, 2024
@baskaryan baskaryan merged commit 33dbfba into master Nov 9, 2024
28 checks passed
@baskaryan baskaryan deleted the bagatur/dont_stream_o1 branch November 9, 2024 03:13
) -> bool:
if isinstance(response_format, type) and is_basemodel_subclass(response_format):
# TODO: Add support for streaming with Pydantic response_format.
warnings.warn("Streaming with Pydantic response_format not yet supported.")

FYI this warning is raised even when a non-streaming API is used (e.g. ainvoke). This is because _generate_with_cache (called by invoke) calls _should_stream internally:

# If stream is not explicitly set, check if implicitly requested by

I don't think it has any bad side effects, other than that the warning used to indicate a real issue and may now be a false positive. However, it is possible that invoke used to stream beforehand and won't anymore 🤔

@baskaryan I'd love to know if we can safely bump the version if we are using ainvoke with structured_outputs + pydantic.
