[Inference Client] Factorize inference payload build #2601
Conversation
Thanks @hanouticelina! I'm sorry, I realized too late that I commented on both `_client.py` and `_async_client.py`, but all comments apply to both (since the async client is autogenerated). My main concern is about determining whether a task expects binaries as input or not (see below). Let me know if you have other ideas on how to fix it. I'm half-happy with the suggested solution of `expect_binary: bool` 😄
```python
def is_raw_content(inputs: Union[str, Dict[str, Any], ContentT]) -> bool:
    return isinstance(inputs, (bytes, Path)) or (
        isinstance(inputs, str) and inputs.startswith(("http://", "https://"))
    )
```
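For reference, here is a runnable version of that predicate with the imports filled in, plus a few illustrative checks. The `ContentT` alias is an assumption standing in for the client's content type:

```python
from pathlib import Path
from typing import Any, Dict, Union

ContentT = Union[bytes, Path, str]  # assumption: stand-in for the client's content type alias


def is_raw_content(inputs: Union[str, Dict[str, Any], ContentT]) -> bool:
    return isinstance(inputs, (bytes, Path)) or (
        isinstance(inputs, str) and inputs.startswith(("http://", "https://"))
    )


assert is_raw_content(b"\x89PNG...")                             # raw bytes
assert is_raw_content(Path("cat.png"))                           # local file path
assert is_raw_content("https://example.com/cat.png")             # URL string
assert not is_raw_content("a plain text input")                  # ordinary string input
assert not is_raw_content({"question": "?", "context": "..."})   # structured input
```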
This is an annoying part 😕 Depending on the context, `inputs.startswith(("http://", "https://"))` should lead to different behavior:

- in `image_to_text`, a URL as input must be passed as `post(data=...)` so that the URL is loaded and sent to the inference server
- in `feature_extraction`, a URL as input should be passed as `post(payload={"inputs": ...})` => here, a URL is a special case of string input, but still a valid one

Since `_prepare_payload` is agnostic of the task, it can't know which case we are in (see the sketch below).
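To make the divergence concrete, here is a hedged sketch. The `post_kwargs_for` helper is hypothetical, not part of the library; the keyword names follow the comment above:

```python
# Hypothetical helper illustrating why the URL case is task-dependent.
def post_kwargs_for(task: str, inputs: str) -> dict:
    if task == "image-to-text":
        # the URL is raw content: it must be loaded and sent to the inference server
        return {"data": inputs}
    # feature-extraction and similar tasks: the URL is just a string input
    return {"payload": {"inputs": inputs}}


url = "https://example.com/cat.png"
assert post_kwargs_for("image-to-text", url) == {"data": url}
assert post_kwargs_for("feature-extraction", url) == {"payload": {"inputs": url}}
```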
What do you think of modifying the method signature as follows?

```python
def _prepare_payload(
    inputs: Union[str, Dict[str, Any], ContentT],
    parameters: Optional[Dict[str, Any]],
    expect_binary: bool,
): ...
```
For tasks that expect a binary input (`image_to_*`, `audio_to_*`), you pass `_prepare_payload(..., expect_binary=True)`. This way you could have logic like this:
```python
is_binary = isinstance(inputs, (bytes, Path))
if expect_binary and not is_binary and not isinstance(inputs, str):
    raise ValueError(...)  # should be a binary, or at least a string (local path or URL)
if expect_binary and not has_parameter:
    return _InferenceInputs(raw_data=inputs)
if not expect_binary and is_binary:
    raise ValueError(...)  # cannot be a binary
# else set as "inputs" in a json payload
...
```
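Fleshing that pseudocode out, a minimal self-contained sketch could look like the following. The `_InferenceInputs` fields, the `has_parameters` computation, and the base64 fallback are assumptions here, not the PR's final code:

```python
import base64
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, Optional, Union


@dataclass
class _InferenceInputs:
    # assumption: container mirroring the PR description (JSON payload + raw data)
    json: Optional[Dict[str, Any]] = None
    raw_data: Optional[Union[bytes, Path, str]] = None


def _prepare_payload(
    inputs: Union[str, Dict[str, Any], bytes, Path],
    parameters: Optional[Dict[str, Any]],
    expect_binary: bool = False,
) -> _InferenceInputs:
    is_binary = isinstance(inputs, (bytes, Path))
    has_parameters = parameters is not None and any(v is not None for v in parameters.values())
    if expect_binary and not is_binary and not isinstance(inputs, str):
        raise ValueError(f"Expected bytes, a local path or a URL, got {type(inputs)}.")
    if not expect_binary and is_binary:
        raise ValueError(f"Unexpected binary input for this task: {type(inputs)}.")
    if expect_binary and not has_parameters:
        # no parameters: send the binary (or path/URL) content as raw data
        return _InferenceInputs(raw_data=inputs)
    if is_binary:
        # binary + parameters: embed the content in the JSON payload
        # (base64-encoding is an assumption; the PR may handle this differently)
        data = inputs.read_bytes() if isinstance(inputs, Path) else inputs
        inputs = base64.b64encode(data).decode()
    payload: Dict[str, Any] = {"inputs": inputs}
    if has_parameters:
        payload["parameters"] = parameters
    return _InferenceInputs(json=payload)
```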
hum yes you're right! Actually, I did not update `image_to_text` since we don't have any parameters or logic there to either send a JSON payload or raw data:

```python
response = self.post(data=image, model=model, task="image-to-text")
output = ImageToTextOutput.parse_obj(response)
return output[0] if isinstance(output, list) else output
```

but of course your point is totally valid.
Having a flag seems to cover all the cases. In the beginning, I thought about having an input-type enum and adding an `input_type` arg to `_prepare_payload()`, but it's simpler to just use an `expect_binary` flag. I don't have a better solution either for now 😕
I will fix the suggestions and get back to this :)
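As a hypothetical illustration of the call sites under the flag approach (names follow the sketch above):

```python
# binary-input tasks (image_to_*, audio_to_*) opt in explicitly
prepared = _prepare_payload(b"<image bytes>", parameters={"max_new_tokens": 20}, expect_binary=True)
print(prepared.json)  # {"inputs": "<base64...>", "parameters": {"max_new_tokens": 20}}

# text tasks keep the default and reject binaries
prepared = _prepare_payload("some text", parameters=None)
print(prepared.json)  # {"inputs": "some text"}
```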
thanks @Wauplin for the review! I addressed your suggestions and ended up adding an `expect_binary` flag.
Looking good! I have a question related to question answering, and then we should be good to merge.
Been a bit picky on corner cases here but I do think it's worth it 😇
Thanks @hanouticelina! That should make the inference client more reliable on corner cases in the future. And reduce the amount of duplicate code 😄
thanks @Wauplin! I think we're good to merge this one
This PR is a first attempt to factorize the payload build in multiple `InferenceClient` methods. In fact, there's some repetitive logic across several methods for handling inputs and parameters, so here we introduce a new (private) helper function to factorize this logic.

Key changes:
- The helper returns an `_InferenceInputs` object containing the JSON payload and the raw data if any. (I don't have a strong opinion on this; we could also return a tuple instead. Both shapes are sketched after this list.)
- The refactoring is applied to both `InferenceClient` and `AsyncInferenceClient`.
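For illustration, the two return shapes mentioned in the first bullet could look like this (a hedged sketch; apart from `_InferenceInputs`, the names are assumptions):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional, Tuple


# Option 1 (what the PR currently does): a small named container
@dataclass
class _InferenceInputs:
    json: Optional[Dict[str, Any]] = None
    raw_data: Optional[bytes] = None


# Option 2 (the alternative mentioned above): a plain tuple that callers unpack positionally
PreparedPayload = Tuple[Optional[Dict[str, Any]], Optional[bytes]]
```

A named container keeps call sites self-documenting (`prepared.raw_data` rather than `prepared[1]`), which is one argument for the object over the tuple.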