Add POST _unified for the inference API #3313
base: main
Conversation
Below you can find the validation results for the APIs you have changed. You can validate these APIs yourself by using the …
```json
"stability": "stable",
"visibility": "public",
"headers": {
  "accept": ["text/event-stream"],
```
This is what we have for the streaming API here: https://github.com/elastic/elasticsearch-specification/blob/main/specification/_json_spec/inference.stream_inference.json#L10
I'm not sure whether this is correct: is it indicating that the POST request accepts an event stream, or that the response returns one? 🤔
Both HTTP headers are used in client requests:
- Accept tells Elasticsearch that returning event streams is OK
- Content-Type is about the request's body, which is indeed JSON here.
In other words, this is correct, thank you!
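To make the two headers concrete, here is a sketch of a client request to the streaming endpoint. This is not taken from the spec; the endpoint path and inference id are hypothetical, for illustration only.

```typescript
// Sketch of a client request to the unified streaming endpoint.
// The path and inference id below are hypothetical, for illustration only.
const request = {
  method: "POST",
  path: "/_inference/chat_completion/my-endpoint/_unified",
  headers: {
    // Accept: the client is able to consume a server-sent event stream
    Accept: "text/event-stream",
    // Content-Type: the request body itself is JSON
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello" }] }),
};
```

The asymmetry is the point: `Content-Type` describes what the client sends (JSON), while `Accept` describes what it is willing to receive (an event stream).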
```ts
/**
 * Perform inference on the service using the Unified Schema
 * @rest_spec_name inference.unified_inference
```
`unified_inference` seemed to follow `stream_inference`. I'm open to other ideas though.
No strong opinions, this seems OK.
```ts
/**
 * Perform inference on the service using the Unified Schema
 * @rest_spec_name inference.unified_inference
 * @availability stack since=8.18.0 stability=stable visibility=public
```
@davidkyle Double check that this is what we want
This looks good to me.
Thanks! I answered your questions, but did not look at the contents yet. Do we have YAML tests in Elasticsearch for this feature? It would help to validate the requests.
LGTM
```diff
@@ -0,0 +1,214 @@
+/*
```
We usually try to have only the request class in the corresponding Request file, and we put all other types in the types folder above (this is just to make it easier to maintain).
Never mind that; could we just move the main Request type to the top of the file?
```ts
/**
 * An object representing part of the conversation.
 */
export interface Message {
```
I think this is missing `name`:
```ts
name?: string
```
```ts
/**
 * The tool call that this message is responding to.
 */
tool_call_id?: string
```
```diff
- tool_call_id?: string
+ tool_call_id?: Id
```
```ts
/**
 * The identifier of the tool call.
 */
id: string
```
```diff
- id: string
+ id: Id
```
```ts
/**
 * The upper bound limit for the number of tokens that can be generated for a completion request.
 */
max_completion_tokens?: number
```
```diff
- max_completion_tokens?: number
+ max_completion_tokens?: long
```
```ts
/**
 * The sampling temperature to use.
 */
temperature?: number
```
```diff
- temperature?: number
+ temperature?: float
```
```ts
/**
 * Nucleus sampling, an alternative to sampling with temperature.
 */
top_p?: number
```
```diff
- top_p?: number
+ top_p?: float
```
```ts
/**
 * A unique identifier for the chat completion
 */
id: string
```
```diff
- id: string
+ id: Id
```
```ts
/**
 * The content of the message.
 */
content: string | Array<ContentObject>
```
Unions such as this one have to be handled in a separate type for them to be understandable by the static client, so here we need a new type:
```ts
/**
 * @codegen_names string, object
 */
export type MessageContent = string | Array<ContentObject>
```
and then:
```ts
content: MessageContent
```
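For illustration, both arms of the union would then be valid values for `content`. The `ContentObject` shape below is an assumption for this example (a text-typed object) and may differ from the final spec.

```typescript
// Assumed shape of ContentObject for this example; check the spec for the real one.
type ContentObject = { type: string; text: string };

// The union type proposed above.
type MessageContent = string | Array<ContentObject>;

// Both forms are valid message content:
const plain: MessageContent = "What is Elastic?";
const structured: MessageContent = [{ type: "text", text: "What is Elastic?" }];
```

The separate named type lets code generators for statically typed clients emit a tagged wrapper (per the `@codegen_names`) instead of an anonymous union.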
```ts
/**
 * Controls which tool is called by the model.
 */
tool_choice?: string | CompletionToolChoice
```
Same as `content` above, a new type is needed to define the union:
```ts
/**
 * @codegen_names string, object
 */
export type CompletionToolType = string | CompletionToolChoice
```
and then:
```ts
tool_choice?: CompletionToolType
```
@jonathan-buttner I have a question about the response format: is there a way to get it in JSON format, or is it just text/stream?
Thanks for the review! Currently it's only …
Thanks for the review! Not yet, let me see if I can add some.
The thing is: if the return type is just text, then the response body should be just …
Oh sorry, I might have misunderstood. Here's what Postman parses out (screenshot omitted), where each one of those messages is valid JSON:
Except for the … Do we still want the response to be only text?
Postman makes it look mainly like JSON, but those are server-sent events.

Is this unified API going to be used for multiple providers? OpenAI, Anthropic Claude, and Google Gemini all use server-sent events but then provide really different structures: https://til.simonwillison.net/llms/streaming-llm-apis. If yes, then clients would need to know what provider this is (in the content-type header?). There would not be a need to map the options in the spec; it's OK to treat it as an opaque string or buffer.

Also, I'm afraid that more complete discussions around this will have to wait for next year.
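To make the distinction concrete, here is a minimal sketch of how a client might split a `text/event-stream` body into individual events. The sample payload and the `[DONE]` terminator are illustrative assumptions (common in OpenAI-style streams), not taken from this PR.

```typescript
// Minimal server-sent-events splitter: collects the payload of each `data:` line.
// Sample data and the [DONE] terminator are assumptions for illustration.
function parseSse(body: string): string[] {
  const events: string[] = [];
  for (const line of body.split("\n")) {
    if (line.startsWith("data: ")) {
      const payload = line.slice("data: ".length).trim();
      // Skip the end-of-stream sentinel, which is not valid JSON
      if (payload !== "[DONE]") events.push(payload);
    }
  }
  return events;
}

const sample = 'data: {"id":"chunk-1","choices":[]}\n\ndata: [DONE]\n';
const events = parseSse(sample);
```

Each collected event can then be parsed as JSON individually, which is why a tool like Postman makes the stream look like a sequence of JSON documents even though the body as a whole is not JSON.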
Yeah that makes sense
Sorry, I should have provided more details when opening the PR. The hope with this API is to provide a consistent request and response schema that will be the same across multiple providers. Internally we'll handle transforming the request and response to match the individual providers and our schema. When we get a request for Anthropic, we'll translate our "unified" schema into a request for Anthropic specifically, and as we get responses from Anthropic we'll translate them back into the format that fits the response schema we're proposing here.
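As a sketch of that translation layer: a unified request is mapped to a provider-specific shape. The type names, fields, and the default value here are invented for illustration; the real Elasticsearch code will differ.

```typescript
// Invented shapes for illustration: a unified request and an Anthropic-style request.
interface UnifiedRequest {
  messages: Array<{ role: string; content: string }>;
  max_completion_tokens?: number;
}

interface AnthropicStyleRequest {
  messages: Array<{ role: string; content: string }>;
  max_tokens: number; // Anthropic-style APIs require an explicit token cap
}

// Translate the unified schema into the provider-specific one.
function toAnthropicStyle(req: UnifiedRequest): AnthropicStyleRequest {
  return {
    messages: req.messages,
    // Hypothetical default when the caller does not set a cap.
    max_tokens: req.max_completion_tokens ?? 1024,
  };
}
```

A symmetric mapping would run in the other direction on each streamed response chunk, so clients only ever see the unified schema regardless of the backing provider.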
This PR adds the `_unified` route for the Inference API. This coincides with the Elasticsearch PR: elastic/elasticsearch#117589