From d5a9753ddf682ffb9eefca92aeec8f94202cca2f Mon Sep 17 00:00:00 2001 From: Drew Robbins Date: Sun, 28 Jan 2024 12:16:00 -0800 Subject: [PATCH 1/4] Add OpenAI metrics --- docs/ai/README.md | 3 +- docs/ai/openai-metrics.md | 375 +++++++++++++++++++++++++++++++++ model/metrics/llm-metrics.yaml | 109 ++++++++++ model/registry/llm.yaml | 9 + 4 files changed, 495 insertions(+), 1 deletion(-) create mode 100644 docs/ai/openai-metrics.md create mode 100644 model/metrics/llm-metrics.yaml diff --git a/docs/ai/README.md b/docs/ai/README.md index 855503f97c..bf83b94856 100644 --- a/docs/ai/README.md +++ b/docs/ai/README.md @@ -19,6 +19,7 @@ Semantic conventions for LLM operations are defined for the following signals: Technology specific semantic conventions are defined for the following LLM providers: -* [OpenAI](openai.md): Semantic Conventions for *OpenAI*. +* [OpenAI](openai.md): Semantic Conventions for *OpenAI* spans. +* [OpenAI Metrics](openai-metrics.md): Semantic Conventions for *OpenAI* metrics. [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md \ No newline at end of file diff --git a/docs/ai/openai-metrics.md b/docs/ai/openai-metrics.md new file mode 100644 index 0000000000..5b231da602 --- /dev/null +++ b/docs/ai/openai-metrics.md @@ -0,0 +1,375 @@ + + +# Semantic Conventions for OpenAI Matrics + +**Status**: [Experimental][DocumentStatus] + +This document defines semantic conventions for OpenAI client metrics. + + + + + +- [Chat completions](#chat-completions) + * [Metric: `openai.chat_completions.tokens`](#metric-openaichat_completionstokens) + * [Metric: `openai.chat_completions.choices`](#metric-openaichat_completionschoices) + * [Metric: `openai.chat_completions.duration`](#metric-openaichat_completionsduration) +- [Embeddings](#embeddings) + * [Metric: `openai.embeddings.tokens`](#metric-openaiembeddingstokens) + * [Metric: `openai.embeddings.vector_size`](#metric-openaiembeddingsvector_size) + * [Metric: `openai.embeddings.duration`](#metric-openaiembeddingsduration) +- [Image generation](#image-generation) + * [Metric: `openai.image_generations.duration`](#metric-openaiimage_generationsduration) + + + +## Chat completions + +### Metric: `openai.chat_completions.tokens` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.chat_completions.tokens` | Counter | `token` | Number of tokens used in prompt and completions. | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`llm.usage.token_type`](../attributes-registry/llm.md) | string | The type of token. | `prompt` | Recommended | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. 
+Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + +`llm.usage.token_type` MUST be one of the following: + +| Value | Description | +|---|---| +| `prompt` | prompt | +| `completion` | completion | + + +### Metric: `openai.chat_completions.choices` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.chat_completions.choices` | Counter | `choice` | Number of choices returned by chat completions call | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.finish_reason`](../attributes-registry/llm.md) | string | The reason the model stopped generating tokens. | `stop` | Recommended | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. 
+ +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + + + +### Metric: `openai.chat_completions.duration` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + +This metric SHOULD be specified with +[`ExplicitBucketBoundaries`](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/metrics/api.md#instrument-advice) +of `[ 0, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10 ]`. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.chat_completions.duration` | Histogram | `s` | Duration of chat completion operation | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.finish_reason`](../attributes-registry/llm.md) | string | The reason the model stopped generating tokens. | `stop` | Recommended | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + + +## Embeddings + +### Metric: `openai.embeddings.tokens` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. 
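As an illustration only, the following is a minimal sketch of how an instrumentation might create and record this counter with the OpenTelemetry Python metrics API. The instrument name and attributes are the ones defined in the tables below; the model name, token count, and server address are hypothetical examples, and the snippet assumes a `MeterProvider` has already been configured.

```python
from opentelemetry import metrics

# Acquire a meter from the globally configured MeterProvider.
meter = metrics.get_meter("openai-instrumentation")

# Counter defined by this convention: tokens consumed by embeddings calls.
embeddings_tokens = meter.create_counter(
    "llm.openai.embeddings.tokens",
    unit="token",
    description="Number of tokens used in prompt and completions.",
)


def record_embeddings_usage(prompt_tokens: int, response_model: str, server_address: str) -> None:
    # Embeddings requests only consume prompt tokens, so only the
    # `prompt` token type is recorded in this sketch.
    embeddings_tokens.add(
        prompt_tokens,
        attributes={
            "llm.response.model": response_model,  # Required
            "llm.usage.token_type": "prompt",      # Recommended
            "server.address": server_address,      # Required
        },
    )


# Example usage with hypothetical values.
record_embeddings_usage(27, "text-embedding-ada-002", "api.openai.com")
```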
+ + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.embeddings.tokens` | Counter | `token` | Number of tokens used in prompt and completions. | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`llm.usage.token_type`](../attributes-registry/llm.md) | string | The type of token. | `prompt` | Recommended | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + +`llm.usage.token_type` MUST be one of the following: + +| Value | Description | +|---|---| +| `prompt` | prompt | +| `completion` | completion | + + +### Metric: `openai.embeddings.vector_size` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.embeddings.vector_size` | Counter | `element` | he size of returned vector. | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. 
| `gpt-4-0613` | Required | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + + +### Metric: `openai.embeddings.duration` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + +This metric SHOULD be specified with +[`ExplicitBucketBoundaries`](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/metrics/api.md#instrument-advice) +of `[ 0, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10 ]`. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.embeddings.duration` | Histogram | `s` | Duration of embeddings operation | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Conditionally Required: if the operation ended in error | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. 
+ +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. + +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + + +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md + +## Image generation + +### Metric: `openai.image_generations.duration` + +**Status**: [Experimental][DocumentStatus] + +This metric is required. + +This metric SHOULD be specified with +[`ExplicitBucketBoundaries`](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/metrics/api.md#instrument-advice) +of `[ 0, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10 ]`. + + +| Name | Instrument Type | Unit (UCUM) | Description | +| -------- | --------------- | ----------- | -------------- | +| `llm.openai.image_generations.duration` | Histogram | `s` | Duration of image generations operation | + + + +| Attribute | Type | Description | Examples | Requirement Level | +|---|---|---|---|---| +| [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Recommended | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | + +**[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. +Instrumentations SHOULD document the list of errors they report. + +The cardinality of `error.type` within one instrumentation library SHOULD be low. +Telemetry consumers that aggregate data from multiple instrumentation libraries and applications +should be prepared for `error.type` to have high cardinality at query time when no +additional filters are applied. + +If the operation has completed successfully, instrumentations SHOULD NOT set `error.type`. + +If a specific domain defines its own set of error identifiers (such as HTTP or gRPC status codes), +it's RECOMMENDED to: + +* Use a domain-specific attribute +* Set `error.type` to capture all errors, regardless of whether they are defined within the domain-specific set or not. + +**[2]:** When observed from the client side, and when communicating through an intermediary, `server.address` SHOULD represent the server address behind any intermediaries, for example proxies, if it's available. + +`error.type` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. 
+ +| Value | Description | +|---|---| +| `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | + + +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md \ No newline at end of file diff --git a/model/metrics/llm-metrics.yaml b/model/metrics/llm-metrics.yaml new file mode 100644 index 0000000000..2ca1ff3b41 --- /dev/null +++ b/model/metrics/llm-metrics.yaml @@ -0,0 +1,109 @@ +groups: + - id: metric.openai.chat_completions.tokens + type: metric + metric_name: llm.openai.chat_completions.tokens + brief: "Number of tokens used in prompt and completions." + instrument: counter + unit: "token" + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: llm.usage.token_type + - ref: server.address + requirement_level: required + - id: metric.openai.chat_completions.choices + type: metric + metric_name: llm.openai.chat_completions.choices + brief: "Number of choices returned by chat completions call" + instrument: counter + unit: "choice" + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: llm.response.finish_reason + - ref: server.address + requirement_level: required + - id: metric.openai.chat_completions.duration + type: metric + metric_name: llm.openai.chat_completions.duration + brief: "Duration of chat completion operation" + instrument: histogram + unit: 's' + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: llm.response.finish_reason + - ref: server.address + requirement_level: required + - id: metric.openai.embeddings.tokens + type: metric + metric_name: llm.openai.embeddings.tokens + brief: "Number of tokens used in prompt and completions." + instrument: counter + unit: "token" + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: llm.usage.token_type + - ref: server.address + requirement_level: required + - id: metric.openai.embeddings.vector_size + type: metric + metric_name: llm.openai.embeddings.vector_size + brief: "he size of returned vector." 
+ instrument: counter + unit: "element" + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: server.address + requirement_level: required + - id: metric.openai.embeddings.duration + type: metric + metric_name: llm.openai.embeddings.duration + brief: "Duration of embeddings operation" + instrument: histogram + unit: 's' + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + requirement_level: + conditionally_required: "if the operation ended in error" + - ref: server.address + requirement_level: required + - id: metric.openai.image_generations.duration + type: metric + metric_name: llm.openai.image_generations.duration + brief: "Duration of image generations operation" + instrument: histogram + unit: 's' + stability: experimental + attributes: + - ref: llm.response.model + requirement_level: required + - ref: error.type + conditionally_required: "if the operation ended in error" + - ref: server.address + requirement_level: required \ No newline at end of file diff --git a/model/registry/llm.yaml b/model/registry/llm.yaml index d45bad3368..1f59626ef4 100644 --- a/model/registry/llm.yaml +++ b/model/registry/llm.yaml @@ -55,6 +55,15 @@ groups: brief: The reason the model stopped generating tokens. examples: ['stop'] tag: llm-generic-response + - id: usage.token_type + type: + members: + - id: prompt + value: 'prompt' + - id: completion + value: 'completion' + brief: The type of token. + examples: ['prompt'] - id: usage.prompt_tokens type: int brief: The number of tokens used in the LLM prompt. From 0ef1c1b190a811520b0ce332ef5e5c4e1847e0f0 Mon Sep 17 00:00:00 2001 From: Drew Robbins Date: Mon, 29 Jan 2024 02:16:02 +0000 Subject: [PATCH 2/4] Fix linting errors --- docs/ai/README.md | 2 +- docs/ai/llm-spans.md | 7 ++++--- docs/ai/openai-metrics.md | 5 +---- docs/ai/openai.md | 26 +++++++++++++------------- 4 files changed, 19 insertions(+), 21 deletions(-) diff --git a/docs/ai/README.md b/docs/ai/README.md index bf83b94856..d5d51dcd75 100644 --- a/docs/ai/README.md +++ b/docs/ai/README.md @@ -22,4 +22,4 @@ Technology specific semantic conventions are defined for the following LLM provi * [OpenAI](openai.md): Semantic Conventions for *OpenAI* spans. * [OpenAI Metrics](openai-metrics.md): Semantic Conventions for *OpenAI* metrics. -[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md \ No newline at end of file +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.26.0/specification/document-status.md diff --git a/docs/ai/llm-spans.md b/docs/ai/llm-spans.md index 19c4162321..12884056f9 100644 --- a/docs/ai/llm-spans.md +++ b/docs/ai/llm-spans.md @@ -10,9 +10,10 @@ linkTitle: LLM Calls -- [LLM Request attributes](#llm-request-attributes) - [Configuration](#configuration) -- [Semantic Conventions for specific LLM technologies](#semantic-conventions-for-specific-llm-technologies) +- [LLM Request attributes](#llm-request-attributes) +- [LLM Response attributes](#llm-response-attributes) +- [Events](#events) @@ -96,4 +97,4 @@ In the lifetime of an LLM span, an event for prompts sent and completions receiv | `llm.completion` | string | The full response string from an LLM. 
If the LLM responds with a more complex output like a JSON object made up of several pieces (such as OpenAI's message choices), this field is the content of the response. If the LLM produces multiple responses, then this field is left blank, and each response is instead captured in an event determined by the specific LLM technology semantic convention.| `Why did the developer stop using OpenTelemetry? Because they couldn't trace their steps!` | Recommended | -[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md \ No newline at end of file +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/ai/openai-metrics.md b/docs/ai/openai-metrics.md index 5b231da602..656148318a 100644 --- a/docs/ai/openai-metrics.md +++ b/docs/ai/openai-metrics.md @@ -124,7 +124,6 @@ it's RECOMMENDED to: | `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | - ### Metric: `openai.chat_completions.duration` **Status**: [Experimental][DocumentStatus] @@ -320,8 +319,6 @@ it's RECOMMENDED to: | `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | -[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md - ## Image generation ### Metric: `openai.image_generations.duration` @@ -372,4 +369,4 @@ it's RECOMMENDED to: | `_OTHER` | A fallback error value to be used when the instrumentation doesn't define a custom value. | -[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md \ No newline at end of file +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/ai/openai.md b/docs/ai/openai.md index 4c7acf404a..8105be0f29 100644 --- a/docs/ai/openai.md +++ b/docs/ai/openai.md @@ -6,22 +6,22 @@ linkTitle: OpenAI **Status**: [Experimental][DocumentStatus] -This document outlines the Semantic Conventions specific to -[OpenAI](https://platform.openai.com/) spans, extending the general semantics -found in the [LLM Semantic Conventions](llm-spans.md). These conventions are -designed to standardize telemetry data for OpenAI interactions, particularly -focusing on the `/chat/completions` endpoint. By following to these guidelines, +This document outlines the Semantic Conventions specific to +[OpenAI](https://platform.openai.com/) spans, extending the general semantics +found in the [LLM Semantic Conventions](llm-spans.md). These conventions are +designed to standardize telemetry data for OpenAI interactions, particularly +focusing on the `/chat/completions` endpoint. By following to these guidelines, developers can ensure consistent, meaningful, and easily interpretable telemetry data across different applications and platforms. ## Chat Completions -The span name for OpenAI chat completions SHOULD be `openai.chat` +The span name for OpenAI chat completions SHOULD be `openai.chat` to maintain consistency and clarity in telemetry data. ## Request Attributes -These are the attributes when instrumenting OpenAI LLM requests with the +These are the attributes when instrumenting OpenAI LLM requests with the `/chat/completions` endpoint. 
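For illustration, the sketch below shows how these request attributes might be set on an `openai.chat` span with the OpenTelemetry Python tracing API. It is non-normative: the request values are hypothetical, and the snippet assumes a `TracerProvider` has already been configured.

```python
from opentelemetry import trace

tracer = trace.get_tracer("openai-instrumentation")

# Hypothetical request payload for the /chat/completions endpoint.
request = {
    "model": "gpt-4",
    "max_tokens": 100,
    "temperature": 0.0,
    "top_p": 1.0,
    "stream": False,
}

# Span name follows the convention above: `openai.chat`.
with tracer.start_as_current_span("openai.chat") as span:
    span.set_attribute("llm.vendor", "openai")
    span.set_attribute("llm.request.model", request["model"])
    span.set_attribute("llm.request.max_tokens", request["max_tokens"])
    span.set_attribute("llm.temperature", request["temperature"])
    span.set_attribute("llm.top_p", request["top_p"])
    span.set_attribute("llm.stream", request["stream"])
    # ... issue the OpenAI API call here and record response attributes
    # and events on the same span.
```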
@@ -67,7 +67,7 @@ Because OpenAI uses a more complex prompt structure, these events will be used i ### Prompt Events -Prompt event name SHOULD be `llm.openai.prompt`. +Prompt event name SHOULD be `llm.openai.prompt`. | Attribute | Type | Description | Examples | Requirement Level | @@ -87,15 +87,15 @@ Tools event name SHOULD be `llm.openai.tool`, specifying potential tools or func | `type` | string | They type of the tool. Currently, only `function` is supported. | `function` | Required | | `function.name` | string | The name of the function to be called. | `get_weather` | Required ! | `function.description` | string | A description of what the function does, used by the model to choose when and how to call the function. | `` | Required | -| `function.parameters` | string | JSON-encoded string of the parameter object for the function. | `{"type": "object", "properties": {}}` | Required | +| `function.parameters` | string | JSON-encoded string of the parameter object for the function. | `{"type": "object", "properties": {}}` | Required | ### Choice Events -Recording details about Choices in each response MAY be included as -Span Events. +Recording details about Choices in each response MAY be included as +Span Events. -Choice event name SHOULD be `llm.openai.choice`. +Choice event name SHOULD be `llm.openai.choice`. If there is more than one `tool_call`, separate events SHOULD be used. @@ -111,4 +111,4 @@ If there is more than one `tool_call`, separate events SHOULD be used. | `tool_call.function.arguments` | string | If exists, the arguments to call a function call with for a given OpenAI response, denoted by ``. The value for `` starts with 0, where 0 is the first message. | `{"type": "object", "properties": {"some":"data"}}` | Required | -[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md \ No newline at end of file +[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md From fd57c6cf6fb1fd05c02591e26d38654d84532a69 Mon Sep 17 00:00:00 2001 From: Drew Robbins Date: Mon, 29 Jan 2024 02:27:49 +0000 Subject: [PATCH 3/4] Fix yamllint errors --- model/metrics/llm-metrics.yaml | 6 ++--- model/registry/llm.yaml | 6 ++--- model/trace/llm.yaml | 40 +++++++++++++++++++++++----------- 3 files changed, 33 insertions(+), 19 deletions(-) diff --git a/model/metrics/llm-metrics.yaml b/model/metrics/llm-metrics.yaml index 2ca1ff3b41..75db1e31ff 100644 --- a/model/metrics/llm-metrics.yaml +++ b/model/metrics/llm-metrics.yaml @@ -102,8 +102,8 @@ groups: stability: experimental attributes: - ref: llm.response.model - requirement_level: required - - ref: error.type + requirement_level: conditionally_required: "if the operation ended in error" + - ref: error.type - ref: server.address - requirement_level: required \ No newline at end of file + requirement_level: required diff --git a/model/registry/llm.yaml b/model/registry/llm.yaml index 1f59626ef4..31bf953b94 100644 --- a/model/registry/llm.yaml +++ b/model/registry/llm.yaml @@ -56,7 +56,7 @@ groups: examples: ['stop'] tag: llm-generic-response - id: usage.token_type - type: + type: members: - id: prompt value: 'prompt' @@ -183,7 +183,7 @@ groups: tag: tech-specific-openai-events - id: openai.function.arguments type: string - brief: If exists, the arguments to call a function call with for a given OpenAI response, denoted by ``. The value for `` starts with 0, where 0 is the first message. 
+ brief: If exists, the arguments to call a function call with for a given OpenAI response, denoted by ``. The value for `` starts with 0, where 0 is the first message. examples: '{"type": "object", "properties": {"some":"data"}}' tag: tech-specific-openai-events - id: openai.choice.type @@ -195,4 +195,4 @@ groups: value: 'message' brief: The type of the choice, either `delta` or `message`. examples: 'message' - tag: tech-specific-openai-events \ No newline at end of file + tag: tech-specific-openai-events diff --git a/model/trace/llm.yaml b/model/trace/llm.yaml index 17fe1e709f..1c844732b0 100644 --- a/model/trace/llm.yaml +++ b/model/trace/llm.yaml @@ -11,7 +11,9 @@ groups: - ref: llm.request.model requirement_level: required note: > - The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. + The name of the LLM a request is being made to. If the LLM is supplied by a vendor, + then the value must be the exact name of the model requested. If the LLM is a fine-tuned + custom model, the value should have a more specific name than the base model that's been fine-tuned. - ref: llm.request.max_tokens requirement_level: recommended - ref: llm.request.temperature @@ -27,7 +29,9 @@ groups: - ref: llm.response.model requirement_level: required note: > - The name of the LLM a response is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. + The name of the LLM a response is being made to. If the LLM is supplied by a vendor, + then the value must be the exact name of the model actually used. If the LLM is a + fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. - ref: llm.response.finish_reason requirement_level: recommended - ref: llm.usage.prompt_tokens @@ -44,13 +48,16 @@ groups: name: llm.content.prompt type: event brief: > - In the lifetime of an LLM span, events for prompts sent and completions received may be created, depending on the configuration of the instrumentation. + In the lifetime of an LLM span, events for prompts sent and completions received + may be created, depending on the configuration of the instrumentation. attributes: - ref: llm.prompt requirement_level: recommended note: > - The full prompt string sent to an LLM in a request. If the LLM accepts a more complex input like a JSON object, this field is blank, and the response is instead captured in an event determined by the specific LLM technology semantic convention. - + The full prompt string sent to an LLM in a request. If the LLM accepts a more + complex input like a JSON object, this field is blank, and the response is + instead captured in an event determined by the specific LLM technology semantic convention. + - id: llm.content.completion name: llm.content.completion type: event @@ -60,7 +67,11 @@ groups: - ref: llm.completion requirement_level: recommended note: > - The full response string from an LLM. If the LLM responds with a more complex output like a JSON object made up of several pieces (such as OpenAI's message choices), this field is the content of the response. 
If the LLM produces multiple responses, then this field is left blank, and each response is instead captured in an event determined by the specific LLM technology semantic convention. + The full response string from an LLM. If the LLM responds with a more + complex output like a JSON object made up of several pieces (such as OpenAI's message choices), + this field is the content of the response. If the LLM produces multiple responses, then this + field is left blank, and each response is instead captured in an event determined by the specific + LLM technology semantic convention. - id: llm.openai type: span @@ -74,7 +85,10 @@ groups: - ref: llm.request.model requirement_level: required note: > - The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. + The name of the LLM a request is being made to. If the LLM is supplied by a + vendor, then the value must be the exact name of the model requested. If the + LLM is a fine-tuned custom model, the value should have a more specific name + than the base model that's been fine-tuned. tag: tech-specific-openai-request - ref: llm.request.max_tokens tag: tech-specific-openai-request @@ -126,7 +140,7 @@ groups: - ref: llm.openai.content requirement_level: required - ref: llm.openai.tool_call.id - requirement_level: + requirement_level: conditionally_required: > Required if the prompt role is `tool`. @@ -159,18 +173,18 @@ groups: - ref: llm.openai.content requirement_level: required - ref: llm.openai.tool_call.id - requirement_level: + requirement_level: conditionally_required: > Required if the choice is the result of a tool call. - ref: llm.openai.tool.type - requirement_level: + requirement_level: conditionally_required: > Required if the choice is the result of a tool call. - ref: llm.openai.function.name - requirement_level: + requirement_level: conditionally_required: > Required if the choice is the result of a tool call of type `function`. - ref: llm.openai.function.arguments - requirement_level: + requirement_level: conditionally_required: > - Required if the choice is the result of a tool call of type `function`. \ No newline at end of file + Required if the choice is the result of a tool call of type `function`. From c80b80c329faaf2b17cf74ec04bb4b5b1c5b5b07 Mon Sep 17 00:00:00 2001 From: Drew Robbins Date: Mon, 29 Jan 2024 05:04:35 +0000 Subject: [PATCH 4/4] Regenerate markdown based on yaml model --- docs/ai/llm-spans.md | 80 ++++++++++------------ docs/ai/openai-metrics.md | 2 +- docs/ai/openai.md | 113 +++++++++++++++++--------------- docs/attributes-registry/llm.md | 6 +- 4 files changed, 96 insertions(+), 105 deletions(-) diff --git a/docs/ai/llm-spans.md b/docs/ai/llm-spans.md index 12884056f9..894d464786 100644 --- a/docs/ai/llm-spans.md +++ b/docs/ai/llm-spans.md @@ -12,7 +12,6 @@ linkTitle: LLM Calls - [Configuration](#configuration) - [LLM Request attributes](#llm-request-attributes) -- [LLM Response attributes](#llm-response-attributes) - [Events](#events) @@ -36,65 +35,52 @@ By default, these configurations SHOULD NOT capture prompts and completions. These attributes track input data and metadata for a request to an LLM. Each attribute represents a concept that is common to most LLMs. 
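As a non-normative sketch, an instrumentation might populate a subset of these attributes on a client span as shown below. The span name and all values are hypothetical examples; the attribute names are those defined in the following table.

```python
from opentelemetry import trace

tracer = trace.get_tracer("llm-instrumentation")

# Hypothetical request and response values for a generic LLM call.
with tracer.start_as_current_span("llm.completion") as span:
    # Request attributes, set before the call is made.
    span.set_attribute("llm.system", "openai")
    span.set_attribute("llm.request.model", "gpt-4")
    span.set_attribute("llm.request.max_tokens", 100)
    span.set_attribute("llm.request.temperature", 0.0)
    span.set_attribute("llm.request.is_stream", False)

    # ... call the model, then record response attributes.
    span.set_attribute("llm.response.model", "gpt-4-0613")
    span.set_attribute("llm.response.finish_reason", "stop")
    span.set_attribute("llm.usage.prompt_tokens", 100)
    span.set_attribute("llm.usage.completion_tokens", 180)
    span.set_attribute("llm.usage.total_tokens", 280)
```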
- + | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| -| `llm.vendor` | string | The name of the LLM foundation model vendor, if applicable. If not using a vendor-supplied model, this field is left blank. | `openai` | Recommended | -| `llm.request.model` | string | The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned. | `gpt-4` | Required | -| `llm.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | Recommended | -| `llm.temperature` | float | The temperature setting for the LLM request. | `0.0` | Recommended | -| `llm.top_p` | float | The top_p sampling setting for the LLM request. | `1.0` | Recommended | -| `llm.stream` | bool | Whether the LLM responds with a stream. | `false` | Recommended | -| `llm.stop_sequences` | array | Array of strings the LLM uses as a stop sequence. | `["stop1"]` | Recommended | - -`llm.model` has the following list of well-known values. If one of them applies, then the respective value MUST be used, otherwise a custom value MAY be used. - -| Value | Description | -|---|---| -| `gpt-4` | GPT-4 | -| `gpt-4-32k` | GPT-4 with 32k context window | -| `gpt-3.5-turbo` | GPT-3.5-turbo | -| `gpt-3.5-turbo-16k` | GPT-3.5-turbo with 16k context window| -| `claude-instant-1` | Claude Instant (latest version) | -| `claude-2` | Claude 2 (latest version) | -| `other-llm` | Any LLM not listed in this table. Use for any fine-tuned version of a model. | +| [`llm.request.is_stream`](../attributes-registry/llm.md) | boolean | Whether the LLM responds with a stream. | `False` | Recommended | +| [`llm.request.max_tokens`](../attributes-registry/llm.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | Recommended | +| [`llm.request.model`](../attributes-registry/llm.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | Required | +| [`llm.request.stop_sequences`](../attributes-registry/llm.md) | string | Array of strings the LLM uses as a stop sequence. | `stop1` | Recommended | +| [`llm.request.temperature`](../attributes-registry/llm.md) | double | The temperature setting for the LLM request. | `0.0` | Recommended | +| [`llm.request.top_p`](../attributes-registry/llm.md) | double | The top_p sampling setting for the LLM request. | `1.0` | Recommended | +| [`llm.response.finish_reason`](../attributes-registry/llm.md) | string | The reason the model stopped generating tokens. | `stop` | Recommended | +| [`llm.response.id`](../attributes-registry/llm.md) | string[] | The unique identifier for the completion. | `[chatcmpl-123]` | Recommended | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. [2] | `gpt-4-0613` | Required | +| [`llm.system`](../attributes-registry/llm.md) | string | The name of the LLM foundation model vendor, if applicable. [3] | `openai` | Recommended | +| [`llm.usage.completion_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM response (completion). | `180` | Recommended | +| [`llm.usage.prompt_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM prompt. 
| `100` | Recommended | +| [`llm.usage.total_tokens`](../attributes-registry/llm.md) | int | The total number of tokens used in the LLM prompt and response. | `280` | Recommended | + +**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. + +**[2]:** The name of the LLM a response is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. + +**[3]:** The name of the LLM foundation model vendor, if applicable. If not using a vendor-supplied model, this field is left blank. -## LLM Response attributes +## Events + +In the lifetime of an LLM span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation. -These attributes track output data and metadata for a response from an LLM. Each attribute represents a concept that is common to most LLMs. + +The event name MUST be `llm.content.prompt`. - | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| -| `llm.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | Recommended | -| `llm.response.model` | string | The name of the LLM a response is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned. | `gpt-4-0613` | Required | -| `llm.response.finish_reason` | string | The reason the model stopped generating tokens | `stop` | Recommended | -| `llm.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` | Recommended | -| `llm.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | Recommended | -| `llm.usage.total_tokens` | int | The total number of tokens used in the LLM prompt and response. | `280` | Recommended | - -`llm.response.finish_reason` MUST be one of the following: - -| Value | Description | -|---|---| -| `stop` | If the model hit a natural stop point or a provided stop sequence. | -| `max_tokens` | If the maximum number of tokens specified in the request was reached. | -| `tool_call` | If a function / tool call was made by the model (for models that support such functionality). | - +| [`llm.prompt`](../attributes-registry/llm.md) | string | The full prompt string sent to an LLM in a request. [1] | `\\n\\nHuman:You are an AI assistant that tells jokes. Can you tell me a joke about OpenTelemetry?\\n\\nAssistant:` | Recommended | -## Events +**[1]:** The full prompt string sent to an LLM in a request. If the LLM accepts a more complex input like a JSON object, this field is blank, and the response is instead captured in an event determined by the specific LLM technology semantic convention. + -In the lifetime of an LLM span, an event for prompts sent and completions received MAY be created, depending on the configuration of the instrumentation. + +The event name MUST be `llm.content.completion`. - | Attribute | Type | Description | Examples | Requirement Level | -| `llm.prompt` | string | The full prompt string sent to an LLM in a request. 
If the LLM accepts a more complex input like a JSON object made up of several pieces (such as OpenAI's different message types), this field is blank, and the response is instead captured in an event determined by the specific LLM technology semantic convention. | `\n\nHuman:You are an AI assistant that tells jokes. Can you tell me a joke about OpenTelemetry?\n\nAssistant:` | Recommended | - +|---|---|---|---|---| +| [`llm.completion`](../attributes-registry/llm.md) | string | The full response string from an LLM in a response. [1] | `Why did the developer stop using OpenTelemetry? Because they couldnt trace their steps!` | Recommended | - -| Attribute | Type | Description | Examples | Requirement Level | -| `llm.completion` | string | The full response string from an LLM. If the LLM responds with a more complex output like a JSON object made up of several pieces (such as OpenAI's message choices), this field is the content of the response. If the LLM produces multiple responses, then this field is left blank, and each response is instead captured in an event determined by the specific LLM technology semantic convention.| `Why did the developer stop using OpenTelemetry? Because they couldn't trace their steps!` | Recommended | +**[1]:** The full response string from an LLM. If the LLM responds with a more complex output like a JSON object made up of several pieces (such as OpenAI's message choices), this field is the content of the response. If the LLM produces multiple responses, then this field is left blank, and each response is instead captured in an event determined by the specific LLM technology semantic convention. [DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/ai/openai-metrics.md b/docs/ai/openai-metrics.md index 656148318a..bf39882d9e 100644 --- a/docs/ai/openai-metrics.md +++ b/docs/ai/openai-metrics.md @@ -341,7 +341,7 @@ of `[ 0, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| | [`error.type`](../attributes-registry/error.md) | string | Describes a class of error the operation ended with. [1] | `timeout`; `java.net.UnknownHostException`; `server_certificate_invalid`; `500` | Recommended | -| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Required | +| [`llm.response.model`](../attributes-registry/llm.md) | string | The name of the LLM a response is being made to. | `gpt-4-0613` | Conditionally Required: if the operation ended in error | | [`server.address`](../attributes-registry/server.md) | string | Server domain name if available without reverse DNS lookup; otherwise, IP address or Unix domain socket name. [2] | `example.com`; `10.1.2.80`; `/tmp/my.sock` | Required | **[1]:** The `error.type` SHOULD be predictable and SHOULD have low cardinality. diff --git a/docs/ai/openai.md b/docs/ai/openai.md index 8105be0f29..001d751ce5 100644 --- a/docs/ai/openai.md +++ b/docs/ai/openai.md @@ -24,40 +24,30 @@ to maintain consistency and clarity in telemetry data. These are the attributes when instrumenting OpenAI LLM requests with the `/chat/completions` endpoint. - + | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| -| `llm.vendor` | string | The name of the LLM foundation model vendor, if applicable. 
If not using a vendor-supplied model, this field is left blank. | `openai` | Recommended | -| `llm.request.model` | string | The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned. | `gpt-4` | Required | -| `llm.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | Recommended | -| `llm.temperature` | float | The temperature setting for the LLM request. | `0.0` | Recommended | -| `llm.top_p` | float | The top_p sampling setting for the LLM request. | `1.0` | Recommended | -| `llm.stream` | bool | Whether the LLM responds with a stream. | `false` | Recommended | -| `llm.stop_sequences` | array | Array of strings the LLM uses as a stop sequence. | `["stop1"]` | Recommended | -| `llm.openai.n` | integer | The number of completions to generate. | `1` | Recommended | -| `llm.openai.presence_penalty` | float | If present, the `presence_penalty` used in an OpenAI request. Value is between -2.0 and 2.0. | `-0.5` | Recommended | -| `llm.openai.frequency_penalty` | float | If present, the `frequency_penalty` used in an OpenAI request. Value is between -2.0 and 2.0. | `-0.5` | Recommended | -| `llm.openai.logit_bias` | string | If present, the JSON-encoded string of a `logit_bias` used in an OpenAI request. | `{2435:-100, 640:-100}` | Recommended | -| `llm.openai.user` | string | If present, the `user` used in an OpenAI request. | `bob` | Opt-in | -| `llm.openai.response_format` | string | An object specifying the format that the model must output. Either `text` or `json_object` | `text` | Recommended | -| `llm.openai.seed` | integer | Seed used in request to improve determinism. | `1234` | Recommended | - - -## Response attributes - -Attributes for chat completion responses SHOULD follow these conventions: - - -| Attribute | Type | Description | Examples | Requirement Level | -|---|---|---|---|---| -| `llm.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | Recommended | -| `llm.response.model` | string | The name of the LLM a response is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model actually used. If the LLM is a fine-tuned custom model, the value SHOULD have a more specific name than the base model that's been fine-tuned. | `gpt-4-0613` | Required | -| `llm.response.finish_reason` | string | The reason the model stopped generating tokens | `stop` | Recommended | -| `llm.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` | Recommended | -| `llm.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | Recommended | -| `llm.usage.total_tokens` | int | The total number of tokens used in the LLM prompt and response. | `280` | Recommended | -| `llm.openai.created` | int | The UNIX timestamp (in seconds) if when the completion was created. | `1677652288` | Recommended | -| `llm.openai.system_fingerprint` | string | This fingerprint represents the backend configuration that the model runs with. | asdf987123 | Recommended | +| [`llm.request.is_stream`](../attributes-registry/llm.md) | boolean | Whether the LLM responds with a stream. 
| `False` | Recommended | +| [`llm.request.max_tokens`](../attributes-registry/llm.md) | int | The maximum number of tokens the LLM generates for a request. | `100` | Recommended | +| [`llm.request.model`](../attributes-registry/llm.md) | string | The name of the LLM a request is being made to. [1] | `gpt-4` | Required | +| [`llm.request.openai.logit_bias`](../attributes-registry/llm.md) | string | If present, the JSON-encoded string of a `logit_bias` used in an OpenAI request | `{2435:-100, 640:-100}` | Recommended | +| [`llm.request.openai.presence_penalty`](../attributes-registry/llm.md) | double | If present, the `presence_penalty` used in an OpenAI request. Value is between -2.0 and 2.0. | `-0.5` | Recommended | +| [`llm.request.openai.response_format`](../attributes-registry/llm.md) | string | An object specifying the format that the model must output. Either `text` or `json_object` | `text` | Recommended | +| [`llm.request.openai.seed`](../attributes-registry/llm.md) | int | Seed used in request to improve determinism. | `1234` | Recommended | +| [`llm.request.openai.user`](../attributes-registry/llm.md) | string | If present, the `user` used in an OpenAI request. | `bob` | Recommended | +| [`llm.request.stop_sequences`](../attributes-registry/llm.md) | string | Array of strings the LLM uses as a stop sequence. | `stop1` | Recommended | +| [`llm.request.temperature`](../attributes-registry/llm.md) | double | The temperature setting for the LLM request. | `0.0` | Recommended | +| [`llm.request.top_p`](../attributes-registry/llm.md) | double | The top_p sampling setting for the LLM request. | `1.0` | Recommended | +| [`llm.response.finish_reason`](../attributes-registry/llm.md) | string | The reason the model stopped generating tokens. | `stop` | Recommended | +| [`llm.response.id`](../attributes-registry/llm.md) | string[] | The unique identifier for the completion. | `[chatcmpl-123]` | Recommended | +| [`llm.response.openai.created`](../attributes-registry/llm.md) | int | The UNIX timestamp (in seconds) if when the completion was created. | `1677652288` | Recommended | +| [`llm.response.openai.system_fingerprint`](../attributes-registry/llm.md) | string | This fingerprint represents the backend configuration that the model runs with. | `asdf987123` | Recommended | +| [`llm.system`](../attributes-registry/llm.md) | string | The name of the LLM foundation model vendor, if applicable. | `openai`; `microsoft` | Recommended | +| [`llm.usage.completion_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM response (completion). | `180` | Recommended | +| [`llm.usage.prompt_tokens`](../attributes-registry/llm.md) | int | The number of tokens used in the LLM prompt. | `100` | Recommended | +| [`llm.usage.total_tokens`](../attributes-registry/llm.md) | int | The total number of tokens used in the LLM prompt and response. | `280` | Recommended | + +**[1]:** The name of the LLM a request is being made to. If the LLM is supplied by a vendor, then the value must be the exact name of the model requested. If the LLM is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned. ## Request Events @@ -67,27 +57,31 @@ Because OpenAI uses a more complex prompt structure, these events will be used i ### Prompt Events -Prompt event name SHOULD be `llm.openai.prompt`. +Prompt event name SHOULD be `llm.content.openai.prompt`. + + +The event name MUST be `llm.content.openai.prompt`. 
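A minimal, non-normative sketch of emitting this event with the OpenTelemetry Python API is shown below; the attributes used are defined in the table that follows, the message content is a hypothetical example, and prompt content should only be captured when the instrumentation is configured to record it.

```python
from opentelemetry import trace

tracer = trace.get_tracer("openai-instrumentation")

with tracer.start_as_current_span("openai.chat") as span:
    # One event per prompt message; content is only included when the
    # instrumentation is configured to capture prompt content.
    span.add_event(
        "llm.content.openai.prompt",
        attributes={
            "llm.openai.role": "user",
            "llm.openai.content": "Tell me a joke about OpenTelemetry.",
        },
    )
```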
- | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| -| `role` | string | The role of the prompt author, can be one of `system`, `user`, `assistant`, or `tool` | `system` | Required | -| `content` | string | The content for a given OpenAI response, denoted by ``. The value for `` starts with 0, where 0 is the first message. | `Why did the developer stop using OpenTelemetry? Because they couldn't trace their steps!` | Required | -| `tool_call_id` | string | If role is `tool` or `function`, then this tool call that this message is responding to. | `get_current_weather` | Conditionally Required: If `role` is `tool`. | +| [`llm.openai.content`](../attributes-registry/llm.md) | string | The content for a given OpenAI response. | `Why did the developer stop using OpenTelemetry? Because they couldn't trace their steps!` | Required | +| [`llm.openai.role`](../attributes-registry/llm.md) | string | The role of the prompt author, can be one of `system`, `user`, `assistant`, or `tool` | `user` | Required | +| [`llm.openai.tool_call.id`](../attributes-registry/llm.md) | string | If role is `tool` or `function`, then the ID of the tool call that this message is responding to. | `get_current_weather` | Conditionally Required: Required if the prompt role is `tool`. | ### Tools Events -Tools event name SHOULD be `llm.openai.tool`, specifying potential tools or functions the LLM can use. +Tools event name SHOULD be `llm.content.openai.tool`, specifying potential tools or functions the LLM can use. + + +The event name MUST be `llm.content.openai.tool`. - | Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| -| `type` | string | They type of the tool. Currently, only `function` is supported. | `function` | Required | -| `function.name` | string | The name of the function to be called. | `get_weather` | Required ! -| `function.description` | string | A description of what the function does, used by the model to choose when and how to call the function. | `` | Required | -| `function.parameters` | string | JSON-encoded string of the parameter object for the function. | `{"type": "object", "properties": {}}` | Required | +| [`llm.openai.function.description`](../attributes-registry/llm.md) | string | A description of what the function does, used by the model to choose when and how to call the function. | `Gets the current weather for a location` | Required | +| [`llm.openai.function.name`](../attributes-registry/llm.md) | string | The name of the function to be called. | `get_weather` | Required | +| [`llm.openai.function.parameters`](../attributes-registry/llm.md) | string | JSON-encoded string of the parameter object for the function. | `{"type": "object", "properties": {}}` | Required | +| [`llm.openai.tool.type`](../attributes-registry/llm.md) | string | The type of the tool. Currently, only `function` is supported. | `function` | Required | ### Choice Events @@ -95,20 +89,31 @@ Tools event name SHOULD be `llm.openai.tool`, specifying potential tools or func Recording details about Choices in each response MAY be included as Span Events. -Choice event name SHOULD be `llm.openai.choice`. +Choice event name SHOULD be `llm.content.openai.choice`. + +If there is more than one `choice`, separate events SHOULD be used. -If there is more than one `tool_call`, separate events SHOULD be used. + +The event name MUST be `llm.content.openai.completion.choice`. - -| `type` | string | Either `delta` or `message`.
| `message` | Required | +| Attribute | Type | Description | Examples | Requirement Level | |---|---|---|---|---| +| [`llm.openai.choice.type`](../attributes-registry/llm.md) | string | The type of the choice, either `delta` or `message`. | `message` | Required | +| [`llm.openai.content`](../attributes-registry/llm.md) | string | The content for a given OpenAI response. | `Why did the developer stop using OpenTelemetry? Because they couldn't trace their steps!` | Required | +| [`llm.openai.function.arguments`](../attributes-registry/llm.md) | string | If present, the arguments to call the function with for a given OpenAI response. | `{"type": "object", "properties": {"some":"data"}}` | Conditionally Required: [1] | +| [`llm.openai.function.name`](../attributes-registry/llm.md) | string | The name of the function to be called. | `get_weather` | Conditionally Required: [2] | +| [`llm.openai.role`](../attributes-registry/llm.md) | string | The role of the prompt author, can be one of `system`, `user`, `assistant`, or `tool` | `user` | Required | +| [`llm.openai.tool.type`](../attributes-registry/llm.md) | string | The type of the tool. Currently, only `function` is supported. | `function` | Conditionally Required: [3] | +| [`llm.openai.tool_call.id`](../attributes-registry/llm.md) | string | If role is `tool` or `function`, then the ID of the tool call that this message is responding to. | `get_current_weather` | Conditionally Required: [4] | +| [`llm.response.finish_reason`](../attributes-registry/llm.md) | string | The reason the model stopped generating tokens. | `stop` | Recommended | + +**[1]:** Required if the choice is the result of a tool call of type `function`. + +**[2]:** Required if the choice is the result of a tool call of type `function`. + +**[3]:** Required if the choice is the result of a tool call. + +**[4]:** Required if the choice is the result of a tool call.
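For illustration only, the following is a minimal sketch of how an instrumentation might record one choice event per returned choice using the OpenTelemetry Python API and the attribute names above. The tracer name, the span name, and the assumption that `response` is a parsed (non-streaming) OpenAI chat completion payload are illustrative, not part of these conventions.

```python
from opentelemetry import trace

# Illustrative tracer name only; not defined by these conventions.
tracer = trace.get_tracer("example.openai.instrumentation")


def record_chat_completion(response: dict) -> None:
    """Record a span for a non-streaming chat completion and one event per choice."""
    with tracer.start_as_current_span("openai.chat_completions") as span:
        span.set_attribute("llm.system", "openai")
        span.set_attribute("llm.response.model", response["model"])
        span.set_attribute("llm.usage.total_tokens", response["usage"]["total_tokens"])
        # One event per choice, as recommended above.
        for choice in response["choices"]:
            message = choice["message"]
            span.add_event(
                "llm.content.openai.choice",
                attributes={
                    "llm.openai.choice.type": "message",
                    "llm.openai.role": message["role"],
                    "llm.openai.content": message.get("content") or "",
                    "llm.response.finish_reason": choice["finish_reason"],
                },
            )
```

For streamed responses, the same event could be emitted per chunk with `llm.openai.choice.type` set to `delta`.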
[DocumentStatus]: https://github.com/open-telemetry/opentelemetry-specification/tree/v1.22.0/specification/document-status.md diff --git a/docs/attributes-registry/llm.md b/docs/attributes-registry/llm.md index 5dfb91d272..e9a66e9021 100644 --- a/docs/attributes-registry/llm.md +++ b/docs/attributes-registry/llm.md @@ -23,13 +23,13 @@ | Attribute | Type | Description | Examples | |---|---|---|---| +| `llm.request.is_stream` | boolean | Whether the LLM responds with a stream. | `False` | | `llm.request.max_tokens` | int | The maximum number of tokens the LLM generates for a request. | `100` | | `llm.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` | | `llm.request.stop_sequences` | string | Array of strings the LLM uses as a stop sequence. | `stop1` | -| `llm.request.stream` | boolean | Whether the LLM responds with a stream. | `False` | | `llm.request.temperature` | double | The temperature setting for the LLM request. | `0.0` | | `llm.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | -| `llm.request.vendor` | string | The name of the LLM foundation model vendor, if applicable. | `openai` | +| `llm.system` | string | The name of the LLM foundation model vendor, if applicable. | `openai` | ### Response Attributes @@ -38,7 +38,7 @@ | Attribute | Type | Description | Examples | |---|---|---|---| | `llm.response.finish_reason` | string | The reason the model stopped generating tokens. | `stop` | -| `llm.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | +| `llm.response.id` | string[] | The unique identifier for the completion. | `[chatcmpl-123]` | | `llm.response.model` | string | The name of the LLM a response is being made to. | `gpt-4-0613` | | `llm.usage.completion_tokens` | int | The number of tokens used in the LLM response (completion). | `180` | | `llm.usage.prompt_tokens` | int | The number of tokens used in the LLM prompt. | `100` |