[Bug]: reasoning_content missing from completion's response #8193
Comments
@V4G4X I don't think OpenRouter returns the value as reasoning_content. @jamesbraza this seems similar to your issue re: include_reasoning. How would you want us to handle this?
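A quick way to check which key (if any) OpenRouter actually uses for the chain of thought is to hit its chat completions endpoint directly and inspect the raw message. A sketch, with the API key and prompt as placeholders; include_reasoning is the flag discussed below in this thread:

```python
import requests

# Sketch: call OpenRouter directly and list the keys on the returned
# message, to see where the reasoning text comes back (this thread
# suggests it is not under "reasoning_content").
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "What is 2 + 2?"}],
        "include_reasoning": True,
    },
)
print(resp.json()["choices"][0]["message"].keys())
```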
I'm also having this problem. Do you have any ideas on how to deal with this issue?
I'll pick this up today - thanks for the work on this @V4G4X
Hi @V4G4X, do you know about https://openrouter.ai/docs/api-reference/parameters#include-reasoning? This was added in #8184. I think you need to do:

```python
response = completion(
    model="openrouter/deepseek/deepseek-r1",
    messages=messages,
    include_reasoning=True,
)
```

Does this resolve your issue?
```python
response = await acompletion(
    model="openrouter/deepseek/deepseek-r1",
    messages=messages,
    include_reasoning=True,
)
```

returns:

```python
provider_specific_fields: Dict[str, Any] = {}
if "reasoning_content" in params:
    provider_specific_fields["reasoning_content"] = params["reasoning_content"]
    setattr(self, "reasoning_content", params["reasoning_content"])
```

How do I need to handle this?
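For what it's worth, given the handling above, a caller would read the value back roughly like this. A sketch only; the attribute names are taken from the snippet above, and the exact shape may differ across LiteLLM versions:

```python
import asyncio
from litellm import acompletion

async def main():
    response = await acompletion(
        model="openrouter/deepseek/deepseek-r1",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
        include_reasoning=True,
    )
    message = response.choices[0].message
    # Per the snippet above, the value is set both as an attribute and
    # inside provider_specific_fields (assuming the dict is stored on
    # the message object).
    print(getattr(message, "reasoning_content", None))
    print(message.provider_specific_fields.get("reasoning_content"))
    print(message.content)

asyncio.run(main())
```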
Hey @kuloud, can you just file a PR to add any unmapped params to the root - this should address the concern and keep us consistent with the OpenAI SDK, correct?
OK, I'll make a PR for this.
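For illustration only (not the actual PR), "adding unmapped params to the root" could look something like this toy generalization of the setattr call quoted above, so that unknown provider fields become attributes the way the OpenAI SDK exposes them:

```python
from typing import Any, Dict

class Message:
    """Toy stand-in for LiteLLM's message object (hypothetical sketch)."""

    def __init__(self, params: Dict[str, Any]):
        self.content = params.get("content")
        self.provider_specific_fields: Dict[str, Any] = {}
        # Mirror every unmapped provider param onto the object root, so
        # callers can read message.reasoning_content (or any future
        # provider field) directly.
        for key, value in params.items():
            if key == "content":
                continue
            self.provider_specific_fields[key] = value
            setattr(self, key, value)

msg = Message({"content": "4", "reasoning_content": "2 + 2 = 4"})
print(msg.reasoning_content)                        # "2 + 2 = 4"
print(msg.provider_specific_fields["reasoning_content"])
```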
cc: @vibhavbhat
What happened?
I see chat UIs (like the OpenRouter Chatroom) showing what reasoning models like R1 are "thinking" before they give their output, and I want to bring that upstream to Aider. Under verbose logging I could see each reasoning token printed for many minutes before the final output tokens came in. I referred to this to write a basic script and get a feel for the response structure (I'm new to dynamically typed development).

Do different providers return different response shapes? And does LiteLLM not support reasoning_content for OpenRouter the way it does when calling the DeepSeek API directly?

My final goal is to get both reasoning tokens and streaming working together, so I can see what Aider is "thinking" while I wait (roughly the loop sketched below).
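A sketch of that end goal, assuming include_reasoning is forwarded to OpenRouter and that streamed deltas expose a reasoning_content field alongside content:

```python
from litellm import completion

# Sketch: print reasoning tokens as they stream in, then the answer
# tokens. Field names assume LiteLLM surfaces the reasoning on each
# delta as reasoning_content.
response = completion(
    model="openrouter/deepseek/deepseek-r1",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    include_reasoning=True,
    stream=True,
)

for chunk in response:
    delta = chunk.choices[0].delta
    reasoning = getattr(delta, "reasoning_content", None)
    if reasoning:
        print(reasoning, end="", flush=True)       # the model's "thinking"
    if getattr(delta, "content", None):
        print(delta.content, end="", flush=True)   # final answer tokens
```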
But I get the following error when running the basic script mentioned above:
Relevant log output
Are you a ML Ops Team?
No
What LiteLLM version are you on?
litellm==1.60.0
Twitter / LinkedIn details
https://www.linkedin.com/in/varungawande/