
[instrumentation] litellm streaming #1227

Open
mikeldking opened this issue Jan 22, 2025 · 0 comments
Labels
instrumantation: litellm related to litellm and litellm proxy

Comments

@mikeldking
Contributor

No description provided.

@mikeldking mikeldking converted this from a draft issue Jan 22, 2025
@github-project-automation github-project-automation bot moved this to 📘 Todo in phoenix Jan 22, 2025
@dosubot dosubot bot added the instrumantation: litellm related to litellm and litellm proxy label Jan 22, 2025
keyur2maru added a commit to keyur2maru/openinference that referenced this issue Feb 3, 2025
- import ModelResponseStream to handle streaming responses
- implement `_finalize_streaming_span` to process streamed tokens
- update `_acompletion_wrapper` to detect and trace streaming results
- ensure span attributes capture message roles, content, and usage stats
- properly end spans for both standard and streaming responses

This change enables instrumentation for streaming responses, ensuring
tracing captures relevant metadata for streamed tokens. Resolves Arize-ai#1227.

Signed-off-by: Keyur Maru <[email protected]>
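The commit message above outlines the approach without showing code. Below is a minimal sketch of how such streaming instrumentation could work, assuming OpenTelemetry-style spans and OpenAI-shaped streaming chunks. The function names mirror the commit message, but the attribute keys, the `usage` handling, and the overall structure are illustrative assumptions, not the merged implementation.

```python
# Minimal sketch (not the merged implementation): wrap the async stream,
# buffer chunks, and finalize the span only once the stream is exhausted.
from opentelemetry import trace

tracer = trace.get_tracer("openinference.instrumentation.litellm")


def _finalize_streaming_span(span, chunks):
    """Set output attributes from the buffered chunks, then end the span."""
    # Concatenate the delta content of each streamed chunk into one message.
    content = "".join(
        chunk.choices[0].delta.content or ""
        for chunk in chunks
        if chunk.choices and chunk.choices[0].delta
    )
    # Attribute keys here are illustrative, not the exact OpenInference ones.
    span.set_attribute("llm.output_messages.0.message.role", "assistant")
    span.set_attribute("llm.output_messages.0.message.content", content)
    # Usage stats, when the provider reports them, arrive on the last chunk.
    usage = getattr(chunks[-1], "usage", None) if chunks else None
    if usage is not None:
        span.set_attribute("llm.token_count.total", usage.total_tokens)
    span.end()


async def _traced_stream(span, stream):
    """Re-yield chunks to the caller while buffering them for the span."""
    chunks = []
    try:
        async for chunk in stream:
            chunks.append(chunk)
            yield chunk
    finally:
        # End the span even if the caller abandons the stream early.
        _finalize_streaming_span(span, chunks)
```

In `_acompletion_wrapper`, the idea would then be to detect a streaming result (for example, by checking for `ModelResponseStream` chunks or a `stream=True` kwarg) and hand it to `_traced_stream`, while non-streaming responses set their attributes and end the span immediately.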