feat(api): unify function types #741

Merged 1 commit on Nov 8, 2023
6 changes: 6 additions & 0 deletions api.md
@@ -1,3 +1,9 @@
# Shared Types

```python
from openai.types import FunctionObject, FunctionParameters
```

# Completions

Types:
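For illustration, a minimal sketch of the two shared types in use, assuming the shared `FunctionObject` keeps the three fields of the per-module classes it replaces (see the deletions below); the `get_weather` schema is hypothetical, not part of the diff:

```python
# Hypothetical example: FunctionParameters is a JSON Schema mapping, and
# FunctionObject bundles it with a name and description.
from openai.types import FunctionObject, FunctionParameters

parameters: FunctionParameters = {
    "type": "object",
    "properties": {"location": {"type": "string", "description": "City name"}},
    "required": ["location"],
}

# FunctionObject is a pydantic model, so it validates on construction.
func = FunctionObject(
    name="get_weather",
    description="Look up the current weather for a location.",
    parameters=parameters,
)
print(func.name, sorted(func.parameters))
```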
84 changes: 72 additions & 12 deletions src/openai/resources/chat/completions.py
@@ -137,8 +137,18 @@ def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
@@ -304,8 +314,18 @@ def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
@@ -464,8 +484,18 @@ def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
@@ -704,8 +734,18 @@ async def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
@@ -871,8 +911,18 @@ async def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
@@ -1031,8 +1081,18 @@ async def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

-response_format: An object specifying the format that the model must output. Used to enable JSON
-mode.
+response_format: An object specifying the format that the model must output.
+
+Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+message the model generates is valid JSON.
+
+**Important:** when using JSON mode, you **must** also instruct the model to
+produce JSON yourself via a system or user message. Without this, the model may
+generate an unending stream of whitespace until the generation reaches the token
+limit, resulting in increased latency and appearance of a "stuck" request. Also
+note that the message content may be partially cut off if
+`finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+or the conversation exceeded the max context length.

seed: This feature is in Beta. If specified, our system will make a best effort to
sample deterministically, such that repeated requests with the same `seed` and
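Since the docstring text above repeats across all six `create` overloads, one sketch of the behavior it documents suffices; the model name and prompts here are illustrative, not part of the diff:

```python
# Sketch of JSON mode per the docstring above: the system message explicitly
# asks for JSON (required), and finish_reason is checked for truncation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",  # illustrative; any JSON-mode-capable model
    response_format={"type": "json_object"},
    messages=[
        # Without an explicit JSON instruction, the model may stream
        # whitespace until it hits the token limit.
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
)

choice = completion.choices[0]
if choice.finish_reason == "length":
    # The JSON may be cut off mid-document; treat it as incomplete.
    print("truncated:", choice.message.content)
else:
    print(choice.message.content)
```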
2 changes: 2 additions & 0 deletions src/openai/types/__init__.py
@@ -5,6 +5,8 @@
from .edit import Edit as Edit
from .image import Image as Image
from .model import Model as Model
+from .shared import FunctionObject as FunctionObject
+from .shared import FunctionParameters as FunctionParameters
from .embedding import Embedding as Embedding
from .fine_tune import FineTune as FineTune
from .completion import Completion as Completion
35 changes: 4 additions & 31 deletions src/openai/types/beta/assistant.py
@@ -1,12 +1,13 @@
# File generated from our OpenAPI spec by Stainless.

-import builtins
-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
from typing_extensions import Literal

+from ..shared import FunctionObject
from ..._models import BaseModel

-__all__ = ["Assistant", "Tool", "ToolCodeInterpreter", "ToolRetrieval", "ToolFunction", "ToolFunctionFunction"]
+__all__ = ["Assistant", "Tool", "ToolCodeInterpreter", "ToolRetrieval", "ToolFunction"]


class ToolCodeInterpreter(BaseModel):
@@ -19,36 +20,8 @@ class ToolRetrieval(BaseModel):
"""The type of tool being defined: `retrieval`"""


class ToolFunctionFunction(BaseModel):
description: str
"""
A description of what the function does, used by the model to choose when and
how to call the function.
"""

name: str
"""The name of the function to be called.

Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
of 64.
"""

parameters: Dict[str, builtins.object]
"""The parameters the functions accepts, described as a JSON Schema object.

See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
for examples, and the
[JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
documentation about the format.

To describe a function that accepts no parameters, provide the value
`{"type": "object", "properties": {}}`.
"""


class ToolFunction(BaseModel):
function: ToolFunctionFunction
"""The function definition."""
function: FunctionObject

type: Literal["function"]
"""The type of tool being defined: `function`"""
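A short sketch of the read side, assuming the beta assistants API of this SDK version; the assistant ID is a placeholder. After this change, `ToolFunction.function` is the shared model rather than a per-module class:

```python
# Sketch only: the assistant ID is a placeholder.
from openai import OpenAI
from openai.types.beta.assistant import ToolFunction

client = OpenAI()
assistant = client.beta.assistants.retrieve("asst_...")  # placeholder ID

for tool in assistant.tools:
    if isinstance(tool, ToolFunction):
        fn = tool.function  # shared FunctionObject, not ToolFunctionFunction
        print(fn.name, "-", fn.description)
```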
35 changes: 4 additions & 31 deletions src/openai/types/beta/assistant_create_params.py
@@ -2,16 +2,17 @@

from __future__ import annotations

-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
from typing_extensions import Literal, Required, TypedDict

+from ...types import shared_params

__all__ = [
    "AssistantCreateParams",
    "Tool",
    "ToolAssistantToolsCode",
    "ToolAssistantToolsRetrieval",
    "ToolAssistantToolsFunction",
-    "ToolAssistantToolsFunctionFunction",
]

@@ -71,36 +72,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
"""The type of tool being defined: `retrieval`"""


class ToolAssistantToolsFunctionFunction(TypedDict, total=False):
description: Required[str]
"""
A description of what the function does, used by the model to choose when and
how to call the function.
"""

name: Required[str]
"""The name of the function to be called.

Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
of 64.
"""

parameters: Required[Dict[str, object]]
"""The parameters the functions accepts, described as a JSON Schema object.

See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
for examples, and the
[JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
documentation about the format.

To describe a function that accepts no parameters, provide the value
`{"type": "object", "properties": {}}`.
"""


class ToolAssistantToolsFunction(TypedDict, total=False):
function: Required[ToolAssistantToolsFunctionFunction]
"""The function definition."""
function: Required[shared_params.FunctionObject]

    type: Required[Literal["function"]]
    """The type of tool being defined: `function`"""
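Callers passing plain dicts are unaffected: the nested `function` dict just has to match the `shared_params.FunctionObject` shape. A hypothetical create call (the update and thread-create-and-run params below accept the same shape):

```python
# Hypothetical usage: model, instructions, and schema are illustrative.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Answer weather questions; call get_weather when needed.",
    tools=[
        {
            "type": "function",
            # This dict must satisfy shared_params.FunctionObject:
            # name, description, and a JSON Schema under "parameters".
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a location.",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
)
print(assistant.id)
```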
35 changes: 4 additions & 31 deletions src/openai/types/beta/assistant_update_params.py
@@ -2,16 +2,17 @@

from __future__ import annotations

-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
from typing_extensions import Literal, Required, TypedDict

+from ...types import shared_params

__all__ = [
    "AssistantUpdateParams",
    "Tool",
    "ToolAssistantToolsCode",
    "ToolAssistantToolsRetrieval",
    "ToolAssistantToolsFunction",
-    "ToolAssistantToolsFunctionFunction",
]


@@ -73,36 +74,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
"""The type of tool being defined: `retrieval`"""


class ToolAssistantToolsFunctionFunction(TypedDict, total=False):
description: Required[str]
"""
A description of what the function does, used by the model to choose when and
how to call the function.
"""

name: Required[str]
"""The name of the function to be called.

Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
of 64.
"""

parameters: Required[Dict[str, object]]
"""The parameters the functions accepts, described as a JSON Schema object.

See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
for examples, and the
[JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
documentation about the format.

To describe a function that accepts no parameters, provide the value
`{"type": "object", "properties": {}}`.
"""


class ToolAssistantToolsFunction(TypedDict, total=False):
function: Required[ToolAssistantToolsFunctionFunction]
"""The function definition."""
function: Required[shared_params.FunctionObject]

    type: Required[Literal["function"]]
    """The type of tool being defined: `function`"""
35 changes: 4 additions & 31 deletions src/openai/types/beta/thread_create_and_run_params.py
@@ -2,9 +2,11 @@

from __future__ import annotations

-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
from typing_extensions import Literal, Required, TypedDict

+from ...types import shared_params

__all__ = [
"ThreadCreateAndRunParams",
"Thread",
@@ -13,7 +15,6 @@
"ToolAssistantToolsCode",
"ToolAssistantToolsRetrieval",
"ToolAssistantToolsFunction",
"ToolAssistantToolsFunctionFunction",
]


@@ -110,36 +111,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
"""The type of tool being defined: `retrieval`"""


class ToolAssistantToolsFunctionFunction(TypedDict, total=False):
description: Required[str]
"""
A description of what the function does, used by the model to choose when and
how to call the function.
"""

name: Required[str]
"""The name of the function to be called.

Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
of 64.
"""

parameters: Required[Dict[str, object]]
"""The parameters the functions accepts, described as a JSON Schema object.

See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
for examples, and the
[JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
documentation about the format.

To describe a function that accepts no parameters, provide the value
`{"type": "object", "properties": {}}`.
"""


class ToolAssistantToolsFunction(TypedDict, total=False):
function: Required[ToolAssistantToolsFunctionFunction]
"""The function definition."""
function: Required[shared_params.FunctionObject]

    type: Required[Literal["function"]]
    """The type of tool being defined: `function`"""