From 1d5287c04e630b52a65524810da817d1c5306b15 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 20 Jan 2021 11:21:03 -0500 Subject: [PATCH 01/13] add opentelemtry env specific setup --- .../setup_overview/open_standards/_index.md | 367 +++++++++++++++++- 1 file changed, 366 insertions(+), 1 deletion(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 97173c5443010..d753b3d7e0f19 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -51,7 +51,7 @@ On each OpenTelemetry-instrumented application, set the resource attributes `dev ### Ingesting OpenTelemetry Traces with the Collector -The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [OpenTelemetry Collector documentation][9]. +The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environent-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following: - A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics. @@ -93,6 +93,371 @@ service: exporters: [datadog/api] ``` +### Environment Specific Setup + +#### Host: + +- Download the appropriate binary from [the project repository latest release](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest). + +- Create a `otel_collector_config.yaml` file. Here is an example template to get started. It enables the collector's otlp receiver and datadog exporter. + +- Run on the host with the configration yaml file set via the `--config` parameter. For example, + + ``` + curl -L https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest/download/otelcontribcol_linux_amd64 | otelcontribcol_linux_amd64 --config otel_collector_config.yaml + ``` + +#### Docker + +Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers). + +##### Receive Traces From Host + +- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. + +- Use a published docker image such as [`otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags) + +- OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. 
By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include:

    ```
    Zipkin/HTTP on port 9411
    Jaeger/gRPC on port 14250
    Jaeger/HTTP on port 14268
    Jaeger/Compact on port 6831 (UDP)
    OTLP/gRPC on port 55680
    OTLP/HTTP on port 55681
    ```

- Run the container with the configured ports and an `otel_collector_config.yaml` file. For example:

    ```
    $ docker run \
        -p 55680:55680 \
        -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
        otel/opentelemetry-collector-contrib:latest
    ```

- Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter).

##### Receive Traces From Other Containers

- Create an `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and Datadog exporter.

- Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter).

- Create a Docker network:

    `docker network create <NETWORK_NAME>`

- Run the OpenTelemetry Collector container and the application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below, this is `opentelemetry-collector`.

    ```
    # Datadog Agent
    docker run -d --name opentelemetry-collector \
      --network <NETWORK_NAME> \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /proc/:/host/proc/:ro \
      -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
      -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
      otel/opentelemetry-collector-contrib:latest

    # Application
    docker run -d --name app \
      --network <NETWORK_NAME> \
      -e OTEL_EXPORTER_OTLP_SPAN_ENDPOINT=http://opentelemetry-collector:55680 \
      company/app:latest
    ```

#### Kubernetes

The OpenTelemetry Collector can be run in two types of [deployment scenarios](https://opentelemetry.io/docs/collector/getting-started/#deployment): as an OpenTelemetry Collector "agent" running on the same host as the application (in a sidecar or DaemonSet), or as a standalone service, for example a container or deployment, typically one per cluster, datacenter, or region.

To accurately track the appropriate metadata in Datadog for informational and billing purposes, run the OpenTelemetry Collector at least in agent mode on each of the Kubernetes nodes.

- When deploying the OpenTelemetry Collector as a DaemonSet, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide.

- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Collector agent container expects this to be named `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. Once both pieces are deployed, you can sanity-check the agent as shown below.
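After applying the manifests and pointing the application pods at the node IP, it can help to confirm that the agent pods are healthy and listening before troubleshooting anything on the Datadog side. A minimal check, assuming the `otel-agent` DaemonSet and `otel-collector` Service names used in the example manifests that follow, might look like:

```
# Confirm an agent pod is scheduled on every node
kubectl get daemonset otel-agent

# Tail an agent pod's logs and watch for receiver or exporter errors
kubectl logs daemonset/otel-agent --tail=50

# Confirm the standalone collector service exposes the OTLP port (55680)
kubectl get service otel-collector
```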
+ +##### OpenTelemetry Kubernetes Example Collector Configuration + +``` +--- +# Give admin rights to the default account +# so that k8s_tagger can fetch info +# RBAC Config Here +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: otel-agent-conf + labels: + app: opentelemetry + component: otel-agent-conf +data: + otel-agent-config: | + receivers: + hostmetrics: + collection_interval: 10s + scrapers: + load: + otlp: + protocols: + grpc: + http: + jaeger: + protocols: + grpc: + thrift_compact: + thrift_http: + zipkin: + exporters: + otlp: + endpoint: "otel-collector.default:55680" + insecure: true + processors: + batch: + memory_limiter: + # Same as --mem-ballast-size-mib CLI argument + ballast_size_mib: 165 + # 80% of maximum memory up to 2G + limit_mib: 400 + # 25% of limit up to 2G + spike_limit_mib: 100 + check_interval: 5s + + # The resource detector injects the pod IP + # to every metric so that the k8s_tagger can + # fetch information afterwards. + resourcedetection: + detectors: [env] + timeout: 5s + override: false + # The k8s_tagger in the Agent is in passthrough mode + # so that it only tags with the minimal info for the + # collector k8s_tagger to complete + k8s_tagger: + passthrough: true + extensions: + health_check: {} + service: + extensions: [health_check] + pipelines: + metrics: + receivers: [otlp] + # resourcedetection must come before k8s_tagger + processors: [batch, resourcedetection, k8s_tagger] + exporters: [otlp] + traces: + receivers: [otlp, jaeger, zipkin] + # resourcedetection must come before k8s_tagger + processors: [memory_limiter, resourcedetection, k8s_tagger, batch] + exporters: [otlp] +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: otel-agent + labels: + app: opentelemetry + component: otel-agent +spec: + selector: + matchLabels: + app: opentelemetry + component: otel-agent + template: + metadata: + labels: + app: opentelemetry + component: otel-agent + spec: + containers: + - command: + - "/otelcontribcol" + - "--config=/conf/otel-agent-config.yaml" + # Memory Ballast size should be max 1/3 to 1/2 of memory. + - "--mem-ballast-size-mib=165" + image: otel/opentelemetry-collector-contrib:latest + name: otel-agent + resources: + limits: + cpu: 500m + memory: 500Mi + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6831 # Jaeger Thrift Compact + protocol: UDP + - containerPort: 8888 # Prometheus Metrics + - containerPort: 9411 # Default endpoint for Zipkin receiver. + - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver. + - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver. + - containerPort: 55680 # Default OpenTelemetry gRPC receiver port. + - containerPort: 55681 # Default OpenTelemetry HTTP receiver port. + env: + # Get pod ip so that k8s_tagger can tag resources + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + # This is picked up by the resource detector + - name: OTEL_RESOURCE + value: "k8s.pod.ip=$(POD_IP)" + volumeMounts: + - name: otel-agent-config-vol + mountPath: /conf + livenessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + readinessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. 
+ volumes: + - configMap: + name: otel-agent-conf + items: + - key: otel-agent-config + path: otel-agent-config.yaml + name: otel-agent-config-vol +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: otel-collector-conf + labels: + app: opentelemetry + component: otel-collector-conf +data: + otel-collector-config: | + receivers: + otlp: + protocols: + grpc: + http: + processors: + batch: + k8s_tagger: + extensions: + health_check: {} + zpages: {} + exporters: + datadog: + api: + key: + service: + extensions: [health_check, zpages] + pipelines: + metrics/2: + receivers: [otlp] + processors: [batch, k8s_tagger] + exporters: [datadog] + traces/2: + receivers: [otlp] + processors: [batch, k8s_tagger] + exporters: [datadog] +--- +apiVersion: v1 +kind: Service +metadata: + name: otel-collector + labels: + app: opentelemetry + component: otel-collector +spec: + ports: + - name: otlp # Default endpoint for OpenTelemetry receiver. + port: 55680 + protocol: TCP + targetPort: 55680 + - name: metrics # Default endpoint for querying metrics. + port: 8888 + selector: + component: otel-collector +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: otel-collector + labels: + app: opentelemetry + component: otel-collector +spec: + selector: + matchLabels: + app: opentelemetry + component: otel-collector + minReadySeconds: 5 + progressDeadlineSeconds: 120 + replicas: 1 + template: + metadata: + labels: + app: opentelemetry + component: otel-collector + spec: + containers: + - command: + - "/otelcontribcol" + - "--config=/conf/otel-collectorcollector-config.yaml" + - "--log-level=debug" + image: otel/opentelemetry-collector-contrib:latest + name: otel-collector + resources: + limits: + cpu: 1 + memory: 2Gi + requests: + cpu: 200m + memory: 400Mi + ports: + - containerPort: 55679 # Default endpoint for ZPages. + - containerPort: 55680 # Default endpoint for OpenTelemetry receiver. + - containerPort: 8888 # Default endpoint for querying metrics. + volumeMounts: + - name: otel-collector-config-vol + mountPath: /conf + livenessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + readinessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + volumes: + - configMap: + name: otel-collector-conf + items: + - key: otel-collector-config + path: otel-collector-config.yaml + name: otel-collector-config-vol +``` + +##### Opentelemetry Kubernetes Example Application Configuration + +``` +apiVersion: apps/v1 +kind: Deployment +... +spec: + containers: + - name: + image: / + env: + - name: HOST_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + # This is picked up by the opentelemetry sdks + - name: OTEL_EXPORTER_OTLP_SPAN_ENDPOINT + value: "http://$(HOST_IP):55680" +``` + + To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][5]. 
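If traces do not appear in Datadog after one of the setups above, a quick way to narrow down where they stop is to add the collector's `logging` exporter next to the Datadog exporter, so every span the collector receives is also written to its own logs. The snippet below is a sketch that extends the example pipeline from earlier on this page (the `datadog/api` exporter name and the `otlp` and `batch` components are taken from that example), not an official recommendation; remove the `logging` exporter once traffic is confirmed, because `debug` output is verbose.

```
exporters:
  datadog/api:
    api:
      key: "<API key>"
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/api, logging]
```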
## Further Reading From 7f4f811151afe658985b1b3a98359fc4f2af9485 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 20 Jan 2021 15:22:42 -0500 Subject: [PATCH 02/13] update env var for hostname config --- .../en/tracing/setup_overview/open_standards/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index d753b3d7e0f19..66494a0920d71 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -150,7 +150,7 @@ Run an Opentelemetry Collector container to receive traces either from the [inst `docker network create ` -- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` +- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` ``` # Datadog Agent @@ -165,7 +165,7 @@ Run an Opentelemetry Collector container to receive traces either from the [inst # Application docker run -d --name app \ --network \ - -e OTEL_EXPORTER_OTLP_SPAN_ENDPOINT=http://opentelemetry-collector:55680 \ + -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \ company/app:latest ``` @@ -177,7 +177,7 @@ In order to accurately track the appropriate metadata in Datadog for information - When deploying the OpenTelemetry Collector as a Daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. -- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. +- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. 
##### OpenTelemetry Kubernetes Example Collector Configuration @@ -453,7 +453,7 @@ spec: fieldRef: fieldPath: status.hostIP # This is picked up by the opentelemetry sdks - - name: OTEL_EXPORTER_OTLP_SPAN_ENDPOINT + - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://$(HOST_IP):55680" ``` From 8f223d58518ca29f9de245613fb0a70a62c7bd11 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Tue, 26 Jan 2021 15:31:32 -0500 Subject: [PATCH 03/13] update otel docs with feedback --- .../setup_overview/open_standards/_index.md | 412 ++++++------------ 1 file changed, 122 insertions(+), 290 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 66494a0920d71..0c38bc4d8f4f6 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -51,7 +51,7 @@ On each OpenTelemetry-instrumented application, set the resource attributes `dev ### Ingesting OpenTelemetry Traces with the Collector -The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environent-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. +The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following: - A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics. @@ -93,42 +93,40 @@ service: exporters: [datadog/api] ``` -### Environment Specific Setup +### Environment specific setup -#### Host: +#### Host -- Download the appropriate binary from [the project repository latest release](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest). +1. Download the appropriate binary from [the project repository latest release][11]. -- Create a `otel_collector_config.yaml` file. Here is an example template to get started. It enables the collector's otlp receiver and datadog exporter. +2. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP Receiver and Datadog Exporter. -- Run on the host with the configration yaml file set via the `--config` parameter. For example, +3. Run on the host with the configration yaml file set via the `--config` parameter. 
For example, ``` - curl -L https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest/download/otelcontribcol_linux_amd64 | otelcontribcol_linux_amd64 --config otel_collector_config.yaml + otelcontribcol_linux_amd64 --config otel_collector_config.yaml ``` #### Docker Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers). -##### Receive Traces From Host +##### Receive Traces from host -- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. +1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter. -- Use a published docker image such as [`otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags) +2. Choose a published docker image such as [`otel/opentelemetry-collector-contrib:latest`][12]. -- OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over `OTLP/gRPC on port 55680`, but common protocols and their ports include: +3. Determine which ports to open on your container. OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: - ``` - Zipkin/HTTP on port 9411 - Jaeger/gRPC on port 14250 - Jaeger/HTTP on port 14268 - Jaeger/Compact on port 6831 (UDP) - OTLP/gRPC on port 55680 - OTLP/HTTP on port 55681 - ``` + - Zipkin/HTTP on port `9411` + - Jaeger/gRPC on port `14250` + - Jaeger/HTTP on port `14268` + - Jaeger/Compact on port (UDP) `6831` + - OTLP/gRPC on port `55680` + - OTLP/HTTP on port `55681` -- Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: +4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: ``` $ docker run \ @@ -137,30 +135,27 @@ Run an Opentelemetry Collector container to receive traces either from the [inst otel/opentelemetry-collector-contrib:latest ``` -- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) +5. Configure your application with the appropriate Resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) -##### Receive Traces From Other Containers +##### Receive traces from other containers -- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. +1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. 
It enables the collector's otlp receiver and datadog exporter. -- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) +2. Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](#opentelemetry-collector-datadog-exporter) -- Create a docker network +3. Create a docker network: `docker network create ` -- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` +4. Run the OpenTelemetry Collector container and application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector` ``` # Datadog Agent docker run -d --name opentelemetry-collector \ --network \ - -v /var/run/docker.sock:/var/run/docker.sock:ro \ - -v /proc/:/host/proc/:ro \ - -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \ -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib:latest + otel/opentelemetry-collector-contrib # Application docker run -d --name app \ @@ -171,273 +166,104 @@ Run an Opentelemetry Collector container to receive traces either from the [inst #### Kubernetes -The OpenTelemetry Collector can be run in two types of [deployment scenarios](https://opentelemetry.io/docs/collector/getting-started/#deployment). First, as an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset. Second, as a standalone service, e.g. a container or deployment, typically per cluster, datacenter or region. +The OpenTelemetry Collector can be run in two types of [deployment scenarios][13]. + +- As an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset; or + +- As a standalone service, e.g. a container or deployment, typically per-cluster, -datacenter or -region. + +To accurately track the appropriate metadata in Datadog, run the OpenTelemetry Collector in agent mode on each of the Kubernetes nodes. + +When deploying the OpenTelemetry Collector as a daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. -In order to accurately track the appropriate metadata in Datadog for information and billing purposes, it is recommended the OpenTelemetry Collector be run at least in agent mode on each of the Kubernetes Nodes. +On the application container, use the downward API to pull the host IP. The application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Application SDKs expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. -- When deploying the OpenTelemetry Collector as a Daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. 
+##### Example Kubernetes OpenTelemetry Collector configuration -- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. +A full example k8s manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][14]. Depending on your environment this example may be modified, however the important sections to note specific to Datadog are as follows. -##### OpenTelemetry Kubernetes Example Collector Configuration +1. The example demonstrates deploying the OpenTelemetry Collectors in ["agent" mode via daemonset][15], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in ["standalone collector" mode][16]. This OpenTelemetry Collector in "standalone collector" mode then exports to the Datadog backend. A diagram of this deployment model [can be found here][17]. + +2. For OpenTelemetry Collectors deployed as agent via daemonset, in the Daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it as part of the `OTEL_RESOURCE` environment variable. This is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included along with a `batch` processor and added to the `traces` pipeline. + +- In the DaemonSet's `spec.containers.env` section ``` ---- -# Give admin rights to the default account -# so that k8s_tagger can fetch info -# RBAC Config Here ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: otel-agent-conf - labels: - app: opentelemetry - component: otel-agent-conf -data: - otel-agent-config: | - receivers: - hostmetrics: - collection_interval: 10s - scrapers: - load: - otlp: - protocols: - grpc: - http: - jaeger: - protocols: - grpc: - thrift_compact: - thrift_http: - zipkin: - exporters: - otlp: - endpoint: "otel-collector.default:55680" - insecure: true - processors: - batch: - memory_limiter: - # Same as --mem-ballast-size-mib CLI argument - ballast_size_mib: 165 - # 80% of maximum memory up to 2G - limit_mib: 400 - # 25% of limit up to 2G - spike_limit_mib: 100 - check_interval: 5s - - # The resource detector injects the pod IP - # to every metric so that the k8s_tagger can - # fetch information afterwards. 
- resourcedetection: - detectors: [env] - timeout: 5s - override: false - # The k8s_tagger in the Agent is in passthrough mode - # so that it only tags with the minimal info for the - # collector k8s_tagger to complete - k8s_tagger: - passthrough: true - extensions: - health_check: {} - service: - extensions: [health_check] - pipelines: - metrics: - receivers: [otlp] - # resourcedetection must come before k8s_tagger - processors: [batch, resourcedetection, k8s_tagger] - exporters: [otlp] - traces: - receivers: [otlp, jaeger, zipkin] - # resourcedetection must come before k8s_tagger - processors: [memory_limiter, resourcedetection, k8s_tagger, batch] - exporters: [otlp] ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: otel-agent - labels: - app: opentelemetry - component: otel-agent -spec: - selector: - matchLabels: - app: opentelemetry - component: otel-agent - template: - metadata: - labels: - app: opentelemetry - component: otel-agent - spec: - containers: - - command: - - "/otelcontribcol" - - "--config=/conf/otel-agent-config.yaml" - # Memory Ballast size should be max 1/3 to 1/2 of memory. - - "--mem-ballast-size-mib=165" - image: otel/opentelemetry-collector-contrib:latest - name: otel-agent - resources: - limits: - cpu: 500m - memory: 500Mi - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6831 # Jaeger Thrift Compact - protocol: UDP - - containerPort: 8888 # Prometheus Metrics - - containerPort: 9411 # Default endpoint for Zipkin receiver. - - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver. - - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver. - - containerPort: 55680 # Default OpenTelemetry gRPC receiver port. - - containerPort: 55681 # Default OpenTelemetry HTTP receiver port. - env: - # Get pod ip so that k8s_tagger can tag resources - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - # This is picked up by the resource detector - - name: OTEL_RESOURCE - value: "k8s.pod.ip=$(POD_IP)" - volumeMounts: - - name: otel-agent-config-vol - mountPath: /conf - livenessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - readinessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - volumes: - - configMap: - name: otel-agent-conf - items: - - key: otel-agent-config - path: otel-agent-config.yaml - name: otel-agent-config-vol ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: otel-collector-conf - labels: - app: opentelemetry - component: otel-collector-conf -data: - otel-collector-config: | - receivers: - otlp: - protocols: - grpc: - http: - processors: - batch: - k8s_tagger: - extensions: - health_check: {} - zpages: {} - exporters: - datadog: - api: - key: - service: - extensions: [health_check, zpages] - pipelines: - metrics/2: - receivers: [otlp] - processors: [batch, k8s_tagger] - exporters: [datadog] - traces/2: - receivers: [otlp] - processors: [batch, k8s_tagger] - exporters: [datadog] ---- -apiVersion: v1 -kind: Service -metadata: - name: otel-collector - labels: - app: opentelemetry - component: otel-collector -spec: - ports: - - name: otlp # Default endpoint for OpenTelemetry receiver. - port: 55680 - protocol: TCP - targetPort: 55680 - - name: metrics # Default endpoint for querying metrics. 
- port: 8888 - selector: - component: otel-collector ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: otel-collector - labels: - app: opentelemetry - component: otel-collector -spec: - selector: - matchLabels: - app: opentelemetry - component: otel-collector - minReadySeconds: 5 - progressDeadlineSeconds: 120 - replicas: 1 - template: - metadata: - labels: - app: opentelemetry - component: otel-collector - spec: - containers: - - command: - - "/otelcontribcol" - - "--config=/conf/otel-collectorcollector-config.yaml" - - "--log-level=debug" - image: otel/opentelemetry-collector-contrib:latest - name: otel-collector - resources: - limits: - cpu: 1 - memory: 2Gi - requests: - cpu: 200m - memory: 400Mi - ports: - - containerPort: 55679 # Default endpoint for ZPages. - - containerPort: 55680 # Default endpoint for OpenTelemetry receiver. - - containerPort: 8888 # Default endpoint for querying metrics. - volumeMounts: - - name: otel-collector-config-vol - mountPath: /conf - livenessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - readinessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - volumes: - - configMap: - name: otel-collector-conf - items: - - key: otel-collector-config - path: otel-collector-config.yaml - name: otel-collector-config-vol + # ... + env: + # Get pod ip so that k8s_tagger can tag resources + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + # This is picked up by the resource detector + - name: OTEL_RESOURCE + value: "k8s.pod.ip=$(POD_IP)" + # ... +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `processors` section + +``` + # ... + # The resource detector injects the pod IP + # to every metric so that the k8s_tagger can + # fetch information afterwards. + resourcedetection: + detectors: [env] + timeout: 5s + override: false + # The k8s_tagger in the Agent is in passthrough mode + # so that it only tags with the minimal info for the + # collector k8s_tagger to complete + k8s_tagger: + passthrough: true + batch: + # ... +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section + +``` + # ... + # resourcedetection must come before k8s_tagger + processors: [resourcedetection, k8s_tagger, batch] + # ... ``` -##### Opentelemetry Kubernetes Example Application Configuration +3. For any OpenTelemetry-Collector's in "standalone collector" mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s` as well as the `k8s_tagger` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline. + +- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section + +``` + # ... + batch: + timeout: 10s + k8s_tagger: + # ... +``` + +- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `exporters` section + +``` + exporters: + datadog: + api: + key: +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section + +``` + # ... + processors: [k8s_tagger, batch] + exporters: [datadog] + # ... +``` + +##### Example Kubernetes OpenTelemetry application configuration + +In addition to the OpenTelemetry Collector configuration, ensure OpenTelemetry SDKs installed in an application transmit telemetry data to the Collector by configuring the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. 
Use the downward API to pull the host IP, and set it as an environment variable, which is then interpolated when setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable. ``` apiVersion: apps/v1 @@ -457,7 +283,6 @@ spec: value: "http://$(HOST_IP):55680" ``` - To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][5]. ## Further Reading @@ -474,3 +299,10 @@ To see more information and additional examples of how you might configure your [8]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#pipelines [9]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/examples [10]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/processor/batchprocessor#batch-processor +[11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest +[12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags +[13]: https://opentelemetry.io/docs/collector/getting-started/#deployment +[14]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/datadogexporter/example/example_k8s_manifest.yaml +[15]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-an-agent +[16]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-a-standalone-collector +[17]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/images/opentelemetry-service-deployment-models.png From 025780a90b5efd1d1b288514ff11cbb7d078e943 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 27 Jan 2021 13:56:21 -0500 Subject: [PATCH 04/13] add grpc protocol to otel config example --- content/en/tracing/setup_overview/open_standards/_index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 0c38bc4d8f4f6..3e880559dcac7 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -66,6 +66,9 @@ Here is an example trace pipeline configured with an `otlp` receiver, `batch` pr ``` receivers: otlp: + protocols: + grpc: + http: processors: batch: From 8b9d7dac561924e3e4b1fd12ee8bde2f0c34df8e Mon Sep 17 00:00:00 2001 From: Eric Mustin Date: Tue, 2 Feb 2021 11:15:10 -0500 Subject: [PATCH 05/13] Update content/en/tracing/setup_overview/open_standards/_index.md Co-authored-by: Pablo Baeyens --- content/en/tracing/setup_overview/open_standards/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 3e880559dcac7..b898613df59da 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -135,7 +135,7 @@ Run an Opentelemetry Collector container to receive traces either from the [inst $ docker run \ -p 55680:55680 \ -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib:latest + otel/opentelemetry-collector-contrib ``` 5. 
Configure your application with the appropriate Resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) From eb009039e93cbd1e241d013c2b0500897c977a80 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Tue, 2 Feb 2021 11:28:39 -0500 Subject: [PATCH 06/13] add feedback round 2 --- .../setup_overview/open_standards/_index.md | 190 +++++++++--------- 1 file changed, 95 insertions(+), 95 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 3e880559dcac7..b1b493da8369c 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -49,7 +49,7 @@ On each OpenTelemetry-instrumented application, set the resource attributes `dev 1. Fully qualified domain name 1. Operating system host name -### Ingesting OpenTelemetry Traces with the Collector +### Ingesting OpenTelemetry traces with the collector The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. @@ -104,7 +104,7 @@ service: 2. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP Receiver and Datadog Exporter. -3. Run on the host with the configration yaml file set via the `--config` parameter. For example, +3. Run the download on the host, specifying the configration yaml file set via the `--config` parameter. For example: ``` otelcontribcol_linux_amd64 --config otel_collector_config.yaml @@ -114,13 +114,13 @@ service: Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers). -##### Receive Traces from host +##### Receive traces from host 1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter. -2. Choose a published docker image such as [`otel/opentelemetry-collector-contrib:latest`][12]. +2. Choose a published Docker image such as [`otel/opentelemetry-collector-contrib:latest`][12]. -3. Determine which ports to open on your container. OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: +3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. 
By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: - Zipkin/HTTP on port `9411` - Jaeger/gRPC on port `14250` @@ -135,23 +135,23 @@ Run an Opentelemetry Collector container to receive traces either from the [inst $ docker run \ -p 55680:55680 \ -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib:latest + otel/opentelemetry-collector-contrib ``` -5. Configure your application with the appropriate Resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) +5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) ##### Receive traces from other containers -1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. +1. Create an `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and Datadog exporter. -2. Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](#opentelemetry-collector-datadog-exporter) +2. Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata [described here](#opentelemetry-collector-datadog-exporter) 3. Create a docker network: `docker network create ` -4. Run the OpenTelemetry Collector container and application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector` +4. Run the OpenTelemetry Collector container and application container in the same network. **Note**: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector`. ``` # Datadog Agent @@ -169,104 +169,104 @@ Run an Opentelemetry Collector container to receive traces either from the [inst #### Kubernetes -The OpenTelemetry Collector can be run in two types of [deployment scenarios][13]. +The OpenTelemetry Collector can be run in two types of [deployment scenarios][13]: -- As an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset; or +- As an OpenTelemetry Collector agent running on the same host as the application in a sidecar or daemonset; or -- As a standalone service, e.g. a container or deployment, typically per-cluster, -datacenter or -region. +- As a standalone service, for example a container or deployment, typically per-cluster, per-datacenter, or per-region. To accurately track the appropriate metadata in Datadog, run the OpenTelemetry Collector in agent mode on each of the Kubernetes nodes. When deploying the OpenTelemetry Collector as a daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. -On the application container, use the downward API to pull the host IP. 
The application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Application SDKs expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. +On the application container, use the downward API to pull the host IP. The application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Application SDKs expect this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. ##### Example Kubernetes OpenTelemetry Collector configuration -A full example k8s manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][14]. Depending on your environment this example may be modified, however the important sections to note specific to Datadog are as follows. - -1. The example demonstrates deploying the OpenTelemetry Collectors in ["agent" mode via daemonset][15], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in ["standalone collector" mode][16]. This OpenTelemetry Collector in "standalone collector" mode then exports to the Datadog backend. A diagram of this deployment model [can be found here][17]. - -2. For OpenTelemetry Collectors deployed as agent via daemonset, in the Daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it as part of the `OTEL_RESOURCE` environment variable. This is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included along with a `batch` processor and added to the `traces` pipeline. - -- In the DaemonSet's `spec.containers.env` section - -``` - # ... - env: - # Get pod ip so that k8s_tagger can tag resources - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - # This is picked up by the resource detector - - name: OTEL_RESOURCE - value: "k8s.pod.ip=$(POD_IP)" - # ... -``` - -- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `processors` section - -``` - # ... - # The resource detector injects the pod IP - # to every metric so that the k8s_tagger can - # fetch information afterwards. - resourcedetection: - detectors: [env] - timeout: 5s - override: false - # The k8s_tagger in the Agent is in passthrough mode - # so that it only tags with the minimal info for the - # collector k8s_tagger to complete - k8s_tagger: - passthrough: true - batch: - # ... -``` - -- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section - -``` - # ... - # resourcedetection must come before k8s_tagger - processors: [resourcedetection, k8s_tagger, batch] - # ... -``` - -3. For any OpenTelemetry-Collector's in "standalone collector" mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s` as well as the `k8s_tagger` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline. - -- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section - -``` - # ... - batch: - timeout: 10s - k8s_tagger: - # ... 
-``` - -- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `exporters` section - -``` - exporters: - datadog: - api: - key: -``` - -- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section - -``` - # ... - processors: [k8s_tagger, batch] - exporters: [datadog] - # ... -``` +A full example Kubernetes manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][14]. Modify the example to suit your environment. The key sections that are specific to Datadog are as follows: + +1. The example demonstrates deploying the OpenTelemetry Collectors in [agent mode via daemonset][15], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in [standalone collector mode][16]. This OpenTelemetry Collector in standalone collector mode then exports to the Datadog backend. See [this diagram of this deployment model][17]. + +2. For OpenTelemetry Collectors deployed as agent via daemonset, in the daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it as part of the `OTEL_RESOURCE` environment variable. This is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included along with a `batch` processor and added to the `traces` pipeline. + + In the daemonset's `spec.containers.env` section: + + ```yaml + # ... + env: + # Get pod ip so that k8s_tagger can tag resources + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + # This is picked up by the resource detector + - name: OTEL_RESOURCE + value: "k8s.pod.ip=$(POD_IP)" + # ... + ``` + + In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `processors` section: + + ```yaml + # ... + # The resource detector injects the pod IP + # to every metric so that the k8s_tagger can + # fetch information afterwards. + resourcedetection: + detectors: [env] + timeout: 5s + override: false + # The k8s_tagger in the Agent is in passthrough mode + # so that it only tags with the minimal info for the + # collector k8s_tagger to complete + k8s_tagger: + passthrough: true + batch: + # ... + ``` + + In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section: + + ```yaml + # ... + # resourcedetection must come before k8s_tagger + processors: [resourcedetection, k8s_tagger, batch] + # ... + ``` + +3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s` as well as the `k8s_tagger` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline. + + In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section: + + ```yaml + # ... + batch: + timeout: 10s + k8s_tagger: + # ... + ``` + + In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `exporters` section: + + ```yaml + exporters: + datadog: + api: + key: + ``` + + In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `service.pipelines.traces` section: + + ```yaml + # ... + processors: [k8s_tagger, batch] + exporters: [datadog] + # ... 
+ ``` ##### Example Kubernetes OpenTelemetry application configuration -In addition to the OpenTelemetry Collector configuration, ensure OpenTelemetry SDKs installed in an application transmit telemetry data to the Collector by configuring the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. Use the downward API to pull the host IP, and set it as an environment variable, which is then interpolated when setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable. +In addition to the OpenTelemetry Collector configuration, ensure that OpenTelemetry SDKs that are installed in an application transmit telemetry data to the collector, by configuring the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. Use the downward API to pull the host IP, and set it as an environment variable, which is then interpolated when setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable: ``` apiVersion: apps/v1 From 561818b00d10b8be196a8263378c090ce8213a97 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 20 Jan 2021 11:21:03 -0500 Subject: [PATCH 07/13] add opentelemtry env specific setup --- .../setup_overview/open_standards/_index.md | 367 +++++++++++++++++- 1 file changed, 366 insertions(+), 1 deletion(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 97173c5443010..d753b3d7e0f19 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -51,7 +51,7 @@ On each OpenTelemetry-instrumented application, set the resource attributes `dev ### Ingesting OpenTelemetry Traces with the Collector -The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [OpenTelemetry Collector documentation][9]. +The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environent-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following: - A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics. @@ -93,6 +93,371 @@ service: exporters: [datadog/api] ``` +### Environment Specific Setup + +#### Host: + +- Download the appropriate binary from [the project repository latest release](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest). + +- Create a `otel_collector_config.yaml` file. Here is an example template to get started. It enables the collector's otlp receiver and datadog exporter. + +- Run on the host with the configration yaml file set via the `--config` parameter. 
For example, + + ``` + curl -L https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest/download/otelcontribcol_linux_amd64 | otelcontribcol_linux_amd64 --config otel_collector_config.yaml + ``` + +#### Docker + +Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers). + +##### Receive Traces From Host + +- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. + +- Use a published docker image such as [`otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags) + +- OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over `OTLP/gRPC on port 55680`, but common protocols and their ports include: + + ``` + Zipkin/HTTP on port 9411 + Jaeger/gRPC on port 14250 + Jaeger/HTTP on port 14268 + Jaeger/Compact on port 6831 (UDP) + OTLP/gRPC on port 55680 + OTLP/HTTP on port 55681 + ``` + +- Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: + + ``` + $ docker run \ + -p 55680:55680 \ + -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ + otel/opentelemetry-collector-contrib:latest + ``` + +- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) + +##### Receive Traces From Other Containers + +- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. + + +- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) + +- Create a docker network + + `docker network create ` + +- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` + + ``` + # Datadog Agent + docker run -d --name opentelemetry-collector \ + --network \ + -v /var/run/docker.sock:/var/run/docker.sock:ro \ + -v /proc/:/host/proc/:ro \ + -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \ + -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ + otel/opentelemetry-collector-contrib:latest + + # Application + docker run -d --name app \ + --network \ + -e OTEL_EXPORTER_OTLP_SPAN_ENDPOINT=http://opentelemetry-collector:55680 \ + company/app:latest + ``` + +#### Kubernetes + +The OpenTelemetry Collector can be run in two types of [deployment scenarios](https://opentelemetry.io/docs/collector/getting-started/#deployment). 
First, as an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset. Second, as a standalone service, e.g. a container or deployment, typically per cluster, datacenter or region. + +In order to accurately track the appropriate metadata in Datadog for information and billing purposes, it is recommended the OpenTelemetry Collector be run at least in agent mode on each of the Kubernetes Nodes. + +- When deploying the OpenTelemetry Collector as a Daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. + +- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. + +##### OpenTelemetry Kubernetes Example Collector Configuration + +``` +--- +# Give admin rights to the default account +# so that k8s_tagger can fetch info +# RBAC Config Here +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: otel-agent-conf + labels: + app: opentelemetry + component: otel-agent-conf +data: + otel-agent-config: | + receivers: + hostmetrics: + collection_interval: 10s + scrapers: + load: + otlp: + protocols: + grpc: + http: + jaeger: + protocols: + grpc: + thrift_compact: + thrift_http: + zipkin: + exporters: + otlp: + endpoint: "otel-collector.default:55680" + insecure: true + processors: + batch: + memory_limiter: + # Same as --mem-ballast-size-mib CLI argument + ballast_size_mib: 165 + # 80% of maximum memory up to 2G + limit_mib: 400 + # 25% of limit up to 2G + spike_limit_mib: 100 + check_interval: 5s + + # The resource detector injects the pod IP + # to every metric so that the k8s_tagger can + # fetch information afterwards. + resourcedetection: + detectors: [env] + timeout: 5s + override: false + # The k8s_tagger in the Agent is in passthrough mode + # so that it only tags with the minimal info for the + # collector k8s_tagger to complete + k8s_tagger: + passthrough: true + extensions: + health_check: {} + service: + extensions: [health_check] + pipelines: + metrics: + receivers: [otlp] + # resourcedetection must come before k8s_tagger + processors: [batch, resourcedetection, k8s_tagger] + exporters: [otlp] + traces: + receivers: [otlp, jaeger, zipkin] + # resourcedetection must come before k8s_tagger + processors: [memory_limiter, resourcedetection, k8s_tagger, batch] + exporters: [otlp] +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: otel-agent + labels: + app: opentelemetry + component: otel-agent +spec: + selector: + matchLabels: + app: opentelemetry + component: otel-agent + template: + metadata: + labels: + app: opentelemetry + component: otel-agent + spec: + containers: + - command: + - "/otelcontribcol" + - "--config=/conf/otel-agent-config.yaml" + # Memory Ballast size should be max 1/3 to 1/2 of memory. + - "--mem-ballast-size-mib=165" + image: otel/opentelemetry-collector-contrib:latest + name: otel-agent + resources: + limits: + cpu: 500m + memory: 500Mi + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6831 # Jaeger Thrift Compact + protocol: UDP + - containerPort: 8888 # Prometheus Metrics + - containerPort: 9411 # Default endpoint for Zipkin receiver. + - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver. 
+ - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver. + - containerPort: 55680 # Default OpenTelemetry gRPC receiver port. + - containerPort: 55681 # Default OpenTelemetry HTTP receiver port. + env: + # Get pod ip so that k8s_tagger can tag resources + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + # This is picked up by the resource detector + - name: OTEL_RESOURCE + value: "k8s.pod.ip=$(POD_IP)" + volumeMounts: + - name: otel-agent-config-vol + mountPath: /conf + livenessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + readinessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + volumes: + - configMap: + name: otel-agent-conf + items: + - key: otel-agent-config + path: otel-agent-config.yaml + name: otel-agent-config-vol +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: otel-collector-conf + labels: + app: opentelemetry + component: otel-collector-conf +data: + otel-collector-config: | + receivers: + otlp: + protocols: + grpc: + http: + processors: + batch: + k8s_tagger: + extensions: + health_check: {} + zpages: {} + exporters: + datadog: + api: + key: + service: + extensions: [health_check, zpages] + pipelines: + metrics/2: + receivers: [otlp] + processors: [batch, k8s_tagger] + exporters: [datadog] + traces/2: + receivers: [otlp] + processors: [batch, k8s_tagger] + exporters: [datadog] +--- +apiVersion: v1 +kind: Service +metadata: + name: otel-collector + labels: + app: opentelemetry + component: otel-collector +spec: + ports: + - name: otlp # Default endpoint for OpenTelemetry receiver. + port: 55680 + protocol: TCP + targetPort: 55680 + - name: metrics # Default endpoint for querying metrics. + port: 8888 + selector: + component: otel-collector +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: otel-collector + labels: + app: opentelemetry + component: otel-collector +spec: + selector: + matchLabels: + app: opentelemetry + component: otel-collector + minReadySeconds: 5 + progressDeadlineSeconds: 120 + replicas: 1 + template: + metadata: + labels: + app: opentelemetry + component: otel-collector + spec: + containers: + - command: + - "/otelcontribcol" + - "--config=/conf/otel-collectorcollector-config.yaml" + - "--log-level=debug" + image: otel/opentelemetry-collector-contrib:latest + name: otel-collector + resources: + limits: + cpu: 1 + memory: 2Gi + requests: + cpu: 200m + memory: 400Mi + ports: + - containerPort: 55679 # Default endpoint for ZPages. + - containerPort: 55680 # Default endpoint for OpenTelemetry receiver. + - containerPort: 8888 # Default endpoint for querying metrics. + volumeMounts: + - name: otel-collector-config-vol + mountPath: /conf + livenessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + readinessProbe: + httpGet: + path: / + port: 13133 # Health Check extension default port. + volumes: + - configMap: + name: otel-collector-conf + items: + - key: otel-collector-config + path: otel-collector-config.yaml + name: otel-collector-config-vol +``` + +##### Opentelemetry Kubernetes Example Application Configuration + +``` +apiVersion: apps/v1 +kind: Deployment +... 
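+# HOST_IP below is populated via the Kubernetes downward API (status.hostIP) and
+# interpolated into the exporter endpoint, so traces are sent to the collector agent on the same node.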
+spec: + containers: + - name: + image: / + env: + - name: HOST_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + # This is picked up by the opentelemetry sdks + - name: OTEL_EXPORTER_OTLP_SPAN_ENDPOINT + value: "http://$(HOST_IP):55680" +``` + + To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][5]. ## Further Reading From 8bd6e092e7db57290f973ed3878e869f01347860 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 20 Jan 2021 15:22:42 -0500 Subject: [PATCH 08/13] update env var for hostname config --- .../en/tracing/setup_overview/open_standards/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index d753b3d7e0f19..66494a0920d71 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -150,7 +150,7 @@ Run an Opentelemetry Collector container to receive traces either from the [inst `docker network create ` -- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` +- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` ``` # Datadog Agent @@ -165,7 +165,7 @@ Run an Opentelemetry Collector container to receive traces either from the [inst # Application docker run -d --name app \ --network \ - -e OTEL_EXPORTER_OTLP_SPAN_ENDPOINT=http://opentelemetry-collector:55680 \ + -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \ company/app:latest ``` @@ -177,7 +177,7 @@ In order to accurately track the appropriate metadata in Datadog for information - When deploying the OpenTelemetry Collector as a Daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. -- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_SPAN_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. +- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. 
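In practice, that wiring on the application container amounts to a few lines of pod spec; a minimal sketch, mirroring the snippet referenced above:

```
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # Interpolated into the endpoint so the SDK targets the collector on this node
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://$(HOST_IP):55680"
```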
##### OpenTelemetry Kubernetes Example Collector Configuration @@ -453,7 +453,7 @@ spec: fieldRef: fieldPath: status.hostIP # This is picked up by the opentelemetry sdks - - name: OTEL_EXPORTER_OTLP_SPAN_ENDPOINT + - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://$(HOST_IP):55680" ``` From 8010adeb26e56daeb276dffc9b2b24284880a5f8 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Tue, 26 Jan 2021 15:31:32 -0500 Subject: [PATCH 09/13] update otel docs with feedback --- .../setup_overview/open_standards/_index.md | 412 ++++++------------ 1 file changed, 122 insertions(+), 290 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 66494a0920d71..0c38bc4d8f4f6 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -51,7 +51,7 @@ On each OpenTelemetry-instrumented application, set the resource attributes `dev ### Ingesting OpenTelemetry Traces with the Collector -The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environent-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. +The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector by passing it in via the `--config=` command line argument. For examples of supplying a configuration file, see the [environment specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9]. The exporter assumes you have a pipeline that uses the `datadog` exporter, and includes a [batch processor][10] configured with the following: - A required `timeout` setting of `10s` (10 seconds). A batch representing 10 seconds of traces is a constraint of Datadog's API Intake for Trace Related Statistics. @@ -93,42 +93,40 @@ service: exporters: [datadog/api] ``` -### Environment Specific Setup +### Environment specific setup -#### Host: +#### Host -- Download the appropriate binary from [the project repository latest release](https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest). +1. Download the appropriate binary from [the project repository latest release][11]. -- Create a `otel_collector_config.yaml` file. Here is an example template to get started. It enables the collector's otlp receiver and datadog exporter. +2. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP Receiver and Datadog Exporter. -- Run on the host with the configration yaml file set via the `--config` parameter. For example, +3. Run on the host with the configration yaml file set via the `--config` parameter. 
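Depending on how the release asset was downloaded, the binary may first need to be marked as executable. A small sketch, assuming the Linux amd64 release asset from step 1:

```
chmod +x otelcontribcol_linux_amd64   # only needed if the file is not already executable
```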
For example, ``` - curl -L https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest/download/otelcontribcol_linux_amd64 | otelcontribcol_linux_amd64 --config otel_collector_config.yaml + otelcontribcol_linux_amd64 --config otel_collector_config.yaml ``` #### Docker Run an Opentelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host), or from [other containers](#receive-traces-from-other-containers). -##### Receive Traces From Host +##### Receive Traces from host -- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. +1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter. -- Use a published docker image such as [`otel/opentelemetry-collector-contrib:latest`](https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags) +2. Choose a published docker image such as [`otel/opentelemetry-collector-contrib:latest`][12]. -- OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over `OTLP/gRPC on port 55680`, but common protocols and their ports include: +3. Determine which ports to open on your container. OpenTelemetry Traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: - ``` - Zipkin/HTTP on port 9411 - Jaeger/gRPC on port 14250 - Jaeger/HTTP on port 14268 - Jaeger/Compact on port 6831 (UDP) - OTLP/gRPC on port 55680 - OTLP/HTTP on port 55681 - ``` + - Zipkin/HTTP on port `9411` + - Jaeger/gRPC on port `14250` + - Jaeger/HTTP on port `14268` + - Jaeger/Compact on port (UDP) `6831` + - OTLP/gRPC on port `55680` + - OTLP/HTTP on port `55681` -- Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: +4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: ``` $ docker run \ @@ -137,30 +135,27 @@ Run an Opentelemetry Collector container to receive traces either from the [inst otel/opentelemetry-collector-contrib:latest ``` -- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) +5. Configure your application with the appropriate Resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) -##### Receive Traces From Other Containers +##### Receive traces from other containers -- Create a `otel_collector_config.yaml` file. [Here is an example template](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's otlp receiver and datadog exporter. +1. Create a `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. 
It enables the collector's otlp receiver and datadog exporter. -- Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](https://docs.datadoghq.com/tracing/setup_overview/open_standards/#opentelemetry-collector-datadog-exporter) +2. Configure your application with the appropriate Resource attributes for unified service tagging by adding the metadata [described here](#opentelemetry-collector-datadog-exporter) -- Create a docker network +3. Create a docker network: `docker network create ` -- Run the OpenTelemetry Collector container and Application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the Collector. In the example below this would be `opentelemetry-collector` +4. Run the OpenTelemetry Collector container and application container in the same network. *Note*: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector` ``` # Datadog Agent docker run -d --name opentelemetry-collector \ --network \ - -v /var/run/docker.sock:/var/run/docker.sock:ro \ - -v /proc/:/host/proc/:ro \ - -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \ -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib:latest + otel/opentelemetry-collector-contrib # Application docker run -d --name app \ @@ -171,273 +166,104 @@ Run an Opentelemetry Collector container to receive traces either from the [inst #### Kubernetes -The OpenTelemetry Collector can be run in two types of [deployment scenarios](https://opentelemetry.io/docs/collector/getting-started/#deployment). First, as an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset. Second, as a standalone service, e.g. a container or deployment, typically per cluster, datacenter or region. +The OpenTelemetry Collector can be run in two types of [deployment scenarios][13]. + +- As an OpenTelemetry Collector "agent" running on the same host as the application in a sidecar or daemonset; or + +- As a standalone service, e.g. a container or deployment, typically per-cluster, -datacenter or -region. + +To accurately track the appropriate metadata in Datadog, run the OpenTelemetry Collector in agent mode on each of the Kubernetes nodes. + +When deploying the OpenTelemetry Collector as a daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. -In order to accurately track the appropriate metadata in Datadog for information and billing purposes, it is recommended the OpenTelemetry Collector be run at least in agent mode on each of the Kubernetes Nodes. +On the application container, use the downward API to pull the host IP. The application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Application SDKs expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. -- When deploying the OpenTelemetry Collector as a Daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide. 
+##### Example Kubernetes OpenTelemetry Collector configuration -- On the application container, use the downward API to pull the host IP; the application container needs an environment variable that points to status.hostIP. The OpenTelemetry Collector container Agent expects this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide. +A full example k8s manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][14]. Depending on your environment this example may be modified, however the important sections to note specific to Datadog are as follows. -##### OpenTelemetry Kubernetes Example Collector Configuration +1. The example demonstrates deploying the OpenTelemetry Collectors in ["agent" mode via daemonset][15], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in ["standalone collector" mode][16]. This OpenTelemetry Collector in "standalone collector" mode then exports to the Datadog backend. A diagram of this deployment model [can be found here][17]. + +2. For OpenTelemetry Collectors deployed as agent via daemonset, in the Daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it as part of the `OTEL_RESOURCE` environment variable. This is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included along with a `batch` processor and added to the `traces` pipeline. + +- In the DaemonSet's `spec.containers.env` section ``` ---- -# Give admin rights to the default account -# so that k8s_tagger can fetch info -# RBAC Config Here ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: otel-agent-conf - labels: - app: opentelemetry - component: otel-agent-conf -data: - otel-agent-config: | - receivers: - hostmetrics: - collection_interval: 10s - scrapers: - load: - otlp: - protocols: - grpc: - http: - jaeger: - protocols: - grpc: - thrift_compact: - thrift_http: - zipkin: - exporters: - otlp: - endpoint: "otel-collector.default:55680" - insecure: true - processors: - batch: - memory_limiter: - # Same as --mem-ballast-size-mib CLI argument - ballast_size_mib: 165 - # 80% of maximum memory up to 2G - limit_mib: 400 - # 25% of limit up to 2G - spike_limit_mib: 100 - check_interval: 5s - - # The resource detector injects the pod IP - # to every metric so that the k8s_tagger can - # fetch information afterwards. 
- resourcedetection: - detectors: [env] - timeout: 5s - override: false - # The k8s_tagger in the Agent is in passthrough mode - # so that it only tags with the minimal info for the - # collector k8s_tagger to complete - k8s_tagger: - passthrough: true - extensions: - health_check: {} - service: - extensions: [health_check] - pipelines: - metrics: - receivers: [otlp] - # resourcedetection must come before k8s_tagger - processors: [batch, resourcedetection, k8s_tagger] - exporters: [otlp] - traces: - receivers: [otlp, jaeger, zipkin] - # resourcedetection must come before k8s_tagger - processors: [memory_limiter, resourcedetection, k8s_tagger, batch] - exporters: [otlp] ---- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: otel-agent - labels: - app: opentelemetry - component: otel-agent -spec: - selector: - matchLabels: - app: opentelemetry - component: otel-agent - template: - metadata: - labels: - app: opentelemetry - component: otel-agent - spec: - containers: - - command: - - "/otelcontribcol" - - "--config=/conf/otel-agent-config.yaml" - # Memory Ballast size should be max 1/3 to 1/2 of memory. - - "--mem-ballast-size-mib=165" - image: otel/opentelemetry-collector-contrib:latest - name: otel-agent - resources: - limits: - cpu: 500m - memory: 500Mi - requests: - cpu: 100m - memory: 100Mi - ports: - - containerPort: 6831 # Jaeger Thrift Compact - protocol: UDP - - containerPort: 8888 # Prometheus Metrics - - containerPort: 9411 # Default endpoint for Zipkin receiver. - - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver. - - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver. - - containerPort: 55680 # Default OpenTelemetry gRPC receiver port. - - containerPort: 55681 # Default OpenTelemetry HTTP receiver port. - env: - # Get pod ip so that k8s_tagger can tag resources - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - # This is picked up by the resource detector - - name: OTEL_RESOURCE - value: "k8s.pod.ip=$(POD_IP)" - volumeMounts: - - name: otel-agent-config-vol - mountPath: /conf - livenessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - readinessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - volumes: - - configMap: - name: otel-agent-conf - items: - - key: otel-agent-config - path: otel-agent-config.yaml - name: otel-agent-config-vol ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: otel-collector-conf - labels: - app: opentelemetry - component: otel-collector-conf -data: - otel-collector-config: | - receivers: - otlp: - protocols: - grpc: - http: - processors: - batch: - k8s_tagger: - extensions: - health_check: {} - zpages: {} - exporters: - datadog: - api: - key: - service: - extensions: [health_check, zpages] - pipelines: - metrics/2: - receivers: [otlp] - processors: [batch, k8s_tagger] - exporters: [datadog] - traces/2: - receivers: [otlp] - processors: [batch, k8s_tagger] - exporters: [datadog] ---- -apiVersion: v1 -kind: Service -metadata: - name: otel-collector - labels: - app: opentelemetry - component: otel-collector -spec: - ports: - - name: otlp # Default endpoint for OpenTelemetry receiver. - port: 55680 - protocol: TCP - targetPort: 55680 - - name: metrics # Default endpoint for querying metrics. 
- port: 8888 - selector: - component: otel-collector ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: otel-collector - labels: - app: opentelemetry - component: otel-collector -spec: - selector: - matchLabels: - app: opentelemetry - component: otel-collector - minReadySeconds: 5 - progressDeadlineSeconds: 120 - replicas: 1 - template: - metadata: - labels: - app: opentelemetry - component: otel-collector - spec: - containers: - - command: - - "/otelcontribcol" - - "--config=/conf/otel-collectorcollector-config.yaml" - - "--log-level=debug" - image: otel/opentelemetry-collector-contrib:latest - name: otel-collector - resources: - limits: - cpu: 1 - memory: 2Gi - requests: - cpu: 200m - memory: 400Mi - ports: - - containerPort: 55679 # Default endpoint for ZPages. - - containerPort: 55680 # Default endpoint for OpenTelemetry receiver. - - containerPort: 8888 # Default endpoint for querying metrics. - volumeMounts: - - name: otel-collector-config-vol - mountPath: /conf - livenessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - readinessProbe: - httpGet: - path: / - port: 13133 # Health Check extension default port. - volumes: - - configMap: - name: otel-collector-conf - items: - - key: otel-collector-config - path: otel-collector-config.yaml - name: otel-collector-config-vol + # ... + env: + # Get pod ip so that k8s_tagger can tag resources + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + # This is picked up by the resource detector + - name: OTEL_RESOURCE + value: "k8s.pod.ip=$(POD_IP)" + # ... +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `processors` section + +``` + # ... + # The resource detector injects the pod IP + # to every metric so that the k8s_tagger can + # fetch information afterwards. + resourcedetection: + detectors: [env] + timeout: 5s + override: false + # The k8s_tagger in the Agent is in passthrough mode + # so that it only tags with the minimal info for the + # collector k8s_tagger to complete + k8s_tagger: + passthrough: true + batch: + # ... +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section + +``` + # ... + # resourcedetection must come before k8s_tagger + processors: [resourcedetection, k8s_tagger, batch] + # ... ``` -##### Opentelemetry Kubernetes Example Application Configuration +3. For any OpenTelemetry-Collector's in "standalone collector" mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s` as well as the `k8s_tagger` enabled. These should be included along with the `datadog` exporter and added to the `traces` pipeline. + +- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section + +``` + # ... + batch: + timeout: 10s + k8s_tagger: + # ... +``` + +- In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `exporters` section + +``` + exporters: + datadog: + api: + key: +``` + +- In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section + +``` + # ... + processors: [k8s_tagger, batch] + exporters: [datadog] + # ... +``` + +##### Example Kubernetes OpenTelemetry application configuration + +In addition to the OpenTelemetry Collector configuration, ensure OpenTelemetry SDKs installed in an application transmit telemetry data to the Collector by configuring the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. 
Use the downward API to pull the host IP, and set it as an environment variable, which is then interpolated when setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable. ``` apiVersion: apps/v1 @@ -457,7 +283,6 @@ spec: value: "http://$(HOST_IP):55680" ``` - To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][5]. ## Further Reading @@ -474,3 +299,10 @@ To see more information and additional examples of how you might configure your [8]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#pipelines [9]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/examples [10]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/processor/batchprocessor#batch-processor +[11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest +[12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags +[13]: https://opentelemetry.io/docs/collector/getting-started/#deployment +[14]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/datadogexporter/example/example_k8s_manifest.yaml +[15]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-an-agent +[16]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-a-standalone-collector +[17]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/images/opentelemetry-service-deployment-models.png From 10e5daa6b3d3d996a0627947767377e4baf5ecb6 Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 27 Jan 2021 13:56:21 -0500 Subject: [PATCH 10/13] add grpc protocol to otel config example --- content/en/tracing/setup_overview/open_standards/_index.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 0c38bc4d8f4f6..3e880559dcac7 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -66,6 +66,9 @@ Here is an example trace pipeline configured with an `otlp` receiver, `batch` pr ``` receivers: otlp: + protocols: + grpc: + http: processors: batch: From 173f6518b3dacf0ba736960eebaa860e49758b2a Mon Sep 17 00:00:00 2001 From: Kari Halsted <12926135+kayayarai@users.noreply.github.com> Date: Wed, 3 Feb 2021 13:47:06 -0600 Subject: [PATCH 11/13] Apply suggestions from code review --- .../setup_overview/open_standards/_index.md | 24 ++++++++++--------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index b1b493da8369c..4c58d927a76dd 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -106,9 +106,9 @@ service: 3. Run the download on the host, specifying the configration yaml file set via the `--config` parameter. For example: - ``` + ``` otelcontribcol_linux_amd64 --config otel_collector_config.yaml - ``` + ``` #### Docker @@ -122,21 +122,21 @@ Run an Opentelemetry Collector container to receive traces either from the [inst 3. Determine which ports to open on your container. 
OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: - - Zipkin/HTTP on port `9411` - - Jaeger/gRPC on port `14250` - - Jaeger/HTTP on port `14268` - - Jaeger/Compact on port (UDP) `6831` - - OTLP/gRPC on port `55680` - - OTLP/HTTP on port `55681` + - Zipkin/HTTP on port `9411` + - Jaeger/gRPC on port `14250` + - Jaeger/HTTP on port `14268` + - Jaeger/Compact on port (UDP) `6831` + - OTLP/gRPC on port `55680` + - OTLP/HTTP on port `55681` 4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example: - ``` + ``` $ docker run \ -p 55680:55680 \ -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ otel/opentelemetry-collector-contrib - ``` + ``` 5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) @@ -149,7 +149,9 @@ Run an Opentelemetry Collector container to receive traces either from the [inst 3. Create a docker network: - `docker network create ` + ``` + docker network create + ``` 4. Run the OpenTelemetry Collector container and application container in the same network. **Note**: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector`. From 5eb870500c309e64df6d238ca2937724ce64c8cc Mon Sep 17 00:00:00 2001 From: ericmustin Date: Wed, 3 Feb 2021 15:52:31 -0500 Subject: [PATCH 12/13] [otel docs] moar indentation --- .../setup_overview/open_standards/_index.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 4c58d927a76dd..4fcf451bf7142 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -106,9 +106,9 @@ service: 3. Run the download on the host, specifying the configration yaml file set via the `--config` parameter. For example: - ``` - otelcontribcol_linux_amd64 --config otel_collector_config.yaml - ``` + ``` + otelcontribcol_linux_amd64 --config otel_collector_config.yaml + ``` #### Docker @@ -122,21 +122,21 @@ Run an Opentelemetry Collector container to receive traces either from the [inst 3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include: - - Zipkin/HTTP on port `9411` - - Jaeger/gRPC on port `14250` - - Jaeger/HTTP on port `14268` - - Jaeger/Compact on port (UDP) `6831` - - OTLP/gRPC on port `55680` - - OTLP/HTTP on port `55681` + - Zipkin/HTTP on port `9411` + - Jaeger/gRPC on port `14250` + - Jaeger/HTTP on port `14268` + - Jaeger/Compact on port (UDP) `6831` + - OTLP/gRPC on port `55680` + - OTLP/HTTP on port `55681` 4. Run the container with the configured ports and an `otel_collector_config.yaml` file. 
For example: - ``` - $ docker run \ - -p 55680:55680 \ - -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib - ``` + ``` + $ docker run \ + -p 55680:55680 \ + -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ + otel/opentelemetry-collector-contrib + ``` 5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter) @@ -149,9 +149,9 @@ Run an Opentelemetry Collector container to receive traces either from the [inst 3. Create a docker network: - ``` - docker network create - ``` + ``` + docker network create + ``` 4. Run the OpenTelemetry Collector container and application container in the same network. **Note**: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector`. From 43eac635b18bc358cb1158f1794589fab243342b Mon Sep 17 00:00:00 2001 From: Kari Halsted <12926135+kayayarai@users.noreply.github.com> Date: Wed, 3 Feb 2021 14:55:28 -0600 Subject: [PATCH 13/13] fix indent --- .../setup_overview/open_standards/_index.md | 43 +++++++++---------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/content/en/tracing/setup_overview/open_standards/_index.md b/content/en/tracing/setup_overview/open_standards/_index.md index 4c58d927a76dd..ba437705c1bc8 100644 --- a/content/en/tracing/setup_overview/open_standards/_index.md +++ b/content/en/tracing/setup_overview/open_standards/_index.md @@ -150,28 +150,28 @@ Run an Opentelemetry Collector container to receive traces either from the [inst 3. Create a docker network: ``` - docker network create + docker network create ``` 4. Run the OpenTelemetry Collector container and application container in the same network. **Note**: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector`. - ``` - # Datadog Agent - docker run -d --name opentelemetry-collector \ - --network \ - -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ - otel/opentelemetry-collector-contrib - - # Application - docker run -d --name app \ - --network \ - -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \ - company/app:latest - ``` + ``` + # Datadog Agent + docker run -d --name opentelemetry-collector \ + --network \ + -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \ + otel/opentelemetry-collector-contrib + + # Application + docker run -d --name app \ + --network \ + -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \ + company/app:latest + ``` #### Kubernetes -The OpenTelemetry Collector can be run in two types of [deployment scenarios][13]: +The OpenTelemetry Collector can be run in two types of [deployment scenarios][4]: - As an OpenTelemetry Collector agent running on the same host as the application in a sidecar or daemonset; or @@ -185,9 +185,9 @@ On the application container, use the downward API to pull the host IP. The appl ##### Example Kubernetes OpenTelemetry Collector configuration -A full example Kubernetes manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][14]. Modify the example to suit your environment. 
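Once adapted, the manifest can be applied in the usual way; a sketch, with an illustrative file name:

```
kubectl apply -f otel-collector-k8s-manifest.yaml   # file name is illustrative
```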
The key sections that are specific to Datadog are as follows: +A full example Kubernetes manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][13]. Modify the example to suit your environment. The key sections that are specific to Datadog are as follows: -1. The example demonstrates deploying the OpenTelemetry Collectors in [agent mode via daemonset][15], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in [standalone collector mode][16]. This OpenTelemetry Collector in standalone collector mode then exports to the Datadog backend. See [this diagram of this deployment model][17]. +1. The example demonstrates deploying the OpenTelemetry Collectors in [agent mode via daemonset][14], which collect relevant k8s node and pod specific metadata, and then forward telemetry data to an OpenTelemetry Collector in [standalone collector mode][15]. This OpenTelemetry Collector in standalone collector mode then exports to the Datadog backend. See [this diagram of this deployment model][16]. 2. For OpenTelemetry Collectors deployed as agent via daemonset, in the daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it as part of the `OTEL_RESOURCE` environment variable. This is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included along with a `batch` processor and added to the `traces` pipeline. @@ -306,8 +306,7 @@ To see more information and additional examples of how you might configure your [10]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/processor/batchprocessor#batch-processor [11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest [12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags -[13]: https://opentelemetry.io/docs/collector/getting-started/#deployment -[14]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/datadogexporter/example/example_k8s_manifest.yaml -[15]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-an-agent -[16]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-a-standalone-collector -[17]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/images/opentelemetry-service-deployment-models.png +[13]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/datadogexporter/example/example_k8s_manifest.yaml +[14]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-an-agent +[15]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-a-standalone-collector +[16]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/images/opentelemetry-service-deployment-models.png