Catch OpenAI parsing errors earlier #8083
## Title
Catch OpenAI parsing errors earlier
## Relevant issues
Helps with #7730
## Type
🐛 Bug Fix (sort of)
## Changes
First, a quick explanation of the problem. `ChatCompletion.parse()` from the `openai` module is designed to return the raw response as a string if it cannot be parsed. However, some functions in litellm assume that `.parse()` always returns a `BaseModel`, for example in `litellm/llms/openai/openai.py`. Judging by the type signature, the function is supposed to always return `BaseModel`, but it never checks whether `parse()` actually parsed anything.

Later on, this causes tricky errors like the one in #7730, where the underlying problem is a bad response from the server, but the error message reported to the user is unrelated. To demonstrate this, here's a non-functioning "OpenAI API server":
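The original server snippet isn't reproduced above; as a hypothetical stand-in (stdlib only, endpoint path and port are assumptions), a server that answers every request with a non-JSON body triggers the same failure:

```python
# Hypothetical stand-in for the "non-functioning OpenAI API server":
# it answers every POST with a body that is not valid JSON, so any
# client that tries to parse the reply as a ChatCompletion will fail.
from http.server import BaseHTTPRequestHandler, HTTPServer


class BrokenOpenAIHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and discard the request body.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        # Reply with something that is not a ChatCompletion JSON object.
        body = b"this is not json"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BrokenOpenAIHandler).serve_forever()
```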
Here's a sample client:
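The client snippet isn't reproduced above either; the real one presumably calls litellm, but a stdlib-only stand-in makes the failure mode concrete: the client POSTs a chat request and expects JSON back, so the broken server's reply blows up at parse time.

```python
# Hypothetical stand-in client (the PR's real client uses litellm):
# POST a chat-completions request and parse the JSON reply. Against the
# broken server above, json.loads raises because the body is not JSON.
import json
import urllib.request


def chat_once(base_url: str) -> dict:
    payload = json.dumps({
        "model": "gpt-4o",  # model name is arbitrary for this demo
        "messages": [{"role": "user", "content": "hello"}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```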
Currently, streaming appears to be successful but returns nothing, while the one-shot version fails with a cryptic error message. Here's what it looks like with the broken server running on localhost:
To improve this, I added a type check in the functions that call `raw_response.parse()` and are declared to return `BaseModel`. With my changes, both tests produce this clearer error message:
Let me know if the message wording should be changed; I'm not familiar with litellm's codebase.
## [REQUIRED] Testing - Attach a screenshot of any new tests passing locally
Here's how many tests pass on my machine in the base repo:
And here's how many pass with my changes:
So no tests were broken by the change. The failing tests seem to be due to missing API keys and such.