1. Fully qualified domain name
1. Operating system host name

### Ingesting OpenTelemetry traces with the collector

The OpenTelemetry Collector is configured by adding a [pipeline][8] to your `otel-collector-configuration.yml` file. Supply the relative path to this configuration file when you start the collector via the `--config=<path/to/configuration_file>` command-line argument. For examples of supplying a configuration file, see the [environment-specific setup](#environment-specific-setup) section below or the [OpenTelemetry Collector documentation][9].
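
For instance, a downloaded contrib build of the collector might be started like this (the binary name is illustrative; see the environment-specific setup below for concrete variants):

```
./otelcontribcol --config=./otel-collector-configuration.yml
```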

The exporter assumes you have a pipeline that uses the `datadog` exporter and includes a [batch processor][10] configured with the following:
- A required `timeout` setting of `10s` (10 seconds). Batching traces into 10-second windows is a constraint of Datadog's API intake for trace-related statistics.

Here is an example trace pipeline configured with an `otlp` receiver, `batch` processor, and `datadog/api` exporter:
```
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    timeout: 10s

exporters:
  datadog/api:
    # ...
    api:
      key: <YOUR_API_KEY>

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/api]
```

### Environment-specific setup

#### Host

1. Download the appropriate binary from [the project repository's latest release][11].

2. Create an `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter.

3. Run the downloaded binary on the host, specifying the configuration YAML file via the `--config` parameter. For example:

```
otelcontribcol_linux_amd64 --config otel_collector_config.yaml
```
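
Putting these steps together, a minimal sketch for a Linux AMD64 host might look like the following (the release version and asset URL are assumptions; check the releases page for the current version):

```
# Download the collector contrib binary and mark it executable
# (version and asset name are assumptions; use the latest release)
curl -L -o otelcontribcol_linux_amd64 \
    https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/download/v0.19.0/otelcontribcol_linux_amd64
chmod +x otelcontribcol_linux_amd64

# Run it against the configuration file created in step 2
./otelcontribcol_linux_amd64 --config otel_collector_config.yaml
```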

#### Docker

Run an OpenTelemetry Collector container to receive traces either from the [installed host](#receive-traces-from-host) or from [other containers](#receive-traces-from-other-containers).

##### Receive traces from host

1. Create an `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and the Datadog exporter.

2. Choose a published Docker image such as [`otel/opentelemetry-collector-contrib:latest`][12].

3. Determine which ports to open on your container. OpenTelemetry traces are sent to the OpenTelemetry Collector over TCP or UDP on a number of ports, which must be exposed on the container. By default, traces are sent over OTLP/gRPC on port `55680`, but common protocols and their ports include:

- Zipkin/HTTP on port `9411`
- Jaeger/gRPC on port `14250`
- Jaeger/HTTP on port `14268`
- Jaeger/Compact (UDP) on port `6831`
- OTLP/gRPC on port `55680`
- OTLP/HTTP on port `55681`

4. Run the container with the configured ports and an `otel_collector_config.yaml` file. For example:

```
$ docker run \
    -p 55680:55680 \
    -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
    otel/opentelemetry-collector-contrib
```

5. Configure your application with the appropriate resource attributes for unified service tagging by [adding metadata](#opentelemetry-collector-datadog-exporter).
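
For example, if your application uses an OpenTelemetry SDK that reads the standard OpenTelemetry environment variables, the collector endpoint and resource attributes can be supplied when the application container starts (the image name and attribute values are placeholders):

```
docker run -d \
    -e OTEL_EXPORTER_OTLP_ENDPOINT=http://<HOST_IP>:55680 \
    -e OTEL_RESOURCE_ATTRIBUTES="service.name=<SERVICE>,deployment.environment=<ENV>,service.version=<VERSION>" \
    company/app:latest
```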

##### Receive traces from other containers

1. Create an `otel_collector_config.yaml` file. [Here is an example template](#ingesting-opentelemetry-traces-with-the-collector) to get started. It enables the collector's OTLP receiver and Datadog exporter.


2. Configure your application with the appropriate resource attributes for unified service tagging by adding the metadata [described here](#opentelemetry-collector-datadog-exporter).

3. Create a Docker network:

```
docker network create <NETWORK_NAME>
```

4. Run the OpenTelemetry Collector container and application container in the same network. **Note**: When running the application container, ensure the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` is configured to use the appropriate hostname for the OpenTelemetry Collector. In the example below, this is `opentelemetry-collector`.

```
# OpenTelemetry Collector
docker run -d --name opentelemetry-collector \
    --network <NETWORK_NAME> \
    -v $(pwd)/otel_collector_config.yaml:/etc/otel/config.yaml \
    otel/opentelemetry-collector-contrib

# Application
docker run -d --name app \
    --network <NETWORK_NAME> \
    -e OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:55680 \
    company/app:latest
```

#### Kubernetes

The OpenTelemetry Collector can be run in two types of [deployment scenarios][4]:

- As an OpenTelemetry Collector agent running on the same host as the application in a sidecar or daemonset; or

- As a standalone service, for example a container or deployment, typically per-cluster, per-datacenter, or per-region.

To accurately track the appropriate metadata in Datadog, run the OpenTelemetry Collector in agent mode on each of the Kubernetes nodes.

When deploying the OpenTelemetry Collector as a daemonset, refer to [the example configuration below](#opentelemetry-kubernetes-example-collector-configuration) as a guide.

On the application container, use the downward API to pull the host IP. The application container needs an environment variable that points to `status.hostIP`. The OpenTelemetry Application SDKs expect this to be named `OTEL_EXPORTER_OTLP_ENDPOINT`. Use the [below example snippet](#opentelemetry-kubernetes-example-application-configuration) as a guide.

##### Example Kubernetes OpenTelemetry Collector configuration

A full example Kubernetes manifest for deploying the OpenTelemetry Collector as both daemonset and standalone collector [can be found here][13]. Modify the example to suit your environment. The key sections that are specific to Datadog are as follows:

1. The example demonstrates deploying the OpenTelemetry Collectors in [agent mode via daemonset][14], which collect relevant Kubernetes node- and pod-specific metadata and forward telemetry data to an OpenTelemetry Collector in [standalone collector mode][15]. That standalone collector then exports to the Datadog backend. See [the diagram of this deployment model][16].

2. For OpenTelemetry Collectors deployed as agents via the daemonset, `spec.containers.env` should use the downward API to capture `status.podIP` and add it to the `OTEL_RESOURCE` environment variable. This variable is used by the OpenTelemetry Collector's `resourcedetection` and `k8s_tagger` processors, which should be included, along with a `batch` processor, in the `traces` pipeline.

In the daemonset's `spec.containers.env` section:

```yaml
# ...
env:
  # Get pod IP so that k8s_tagger can tag resources
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  # This is picked up by the resource detector
  - name: OTEL_RESOURCE
    value: "k8s.pod.ip=$(POD_IP)"
# ...
```

In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `processors` section:

```yaml
# ...
# The resource detector injects the pod IP
# to every metric so that the k8s_tagger can
# fetch information afterwards.
resourcedetection:
  detectors: [env]
  timeout: 5s
  override: false
# The k8s_tagger in the Agent is in passthrough mode
# so that it only tags with the minimal info for the
# collector k8s_tagger to complete
k8s_tagger:
  passthrough: true
batch:
# ...
```

In the `otel-agent-conf` ConfigMap's `data.otel-agent-config` `service.pipelines.traces` section:

```yaml
# ...
# resourcedetection must come before k8s_tagger
processors: [resourcedetection, k8s_tagger, batch]
# ...
```

3. For OpenTelemetry Collectors in standalone collector mode, which receive traces from downstream collectors and export to Datadog's backend, include a `batch` processor configured with a `timeout` of `10s`, and enable the `k8s_tagger`. Add these, along with the `datadog` exporter, to the `traces` pipeline.

In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `processors` section:

```yaml
# ...
batch:
  timeout: 10s
k8s_tagger:
# ...
```

In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `exporters` section:

```yaml
exporters:
  datadog:
    api:
      key: <YOUR_API_KEY>
```

In the `otel-collector-conf` ConfigMap's `data.otel-collector-config` `service.pipelines.traces` section:

```yaml
# ...
processors: [k8s_tagger, batch]
exporters: [datadog]
# ...
```
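
Taken together, the Datadog-specific parts of the standalone collector configuration sketch out as follows (receivers and other sections are omitted here and follow the referenced example manifest):

```yaml
processors:
  batch:
    timeout: 10s
  k8s_tagger:

exporters:
  datadog:
    api:
      key: <YOUR_API_KEY>

service:
  pipelines:
    traces:
      # receivers are configured as in the referenced example manifest
      processors: [k8s_tagger, batch]
      exporters: [datadog]
```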

##### Example Kubernetes OpenTelemetry application configuration

In addition to the OpenTelemetry Collector configuration, ensure that the OpenTelemetry SDKs installed in your application transmit telemetry data to the collector by setting the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to the host IP. Use the downward API to pull the host IP into an environment variable, which is then interpolated when setting `OTEL_EXPORTER_OTLP_ENDPOINT`:

```
apiVersion: apps/v1
kind: Deployment
# ...
spec:
  containers:
    - name: <CONTAINER_NAME>
      image: <CONTAINER_IMAGE>/<TAG>
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # This is picked up by the OpenTelemetry SDKs
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://$(HOST_IP):55680"
```
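
Once the collector manifest and the application deployment are in place, a typical workflow is to apply them and confirm the pods are running (the file names are illustrative):

```
kubectl apply -f example_k8s_manifest.yaml
kubectl apply -f app_deployment.yaml
kubectl get pods -o wide
```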

To see more information and additional examples of how you might configure your collector, see [the OpenTelemetry Collector configuration documentation][5].

## Further Reading
[8]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#pipelines
[9]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/examples
[10]: https://github.com/open-telemetry/opentelemetry-collector/tree/master/processor/batchprocessor#batch-processor
[11]: https://github.com/open-telemetry/opentelemetry-collector-contrib/releases/latest
[12]: https://hub.docker.com/r/otel/opentelemetry-collector-contrib/tags
[13]: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/master/exporter/datadogexporter/example/example_k8s_manifest.yaml
[14]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-an-agent
[15]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/design.md#running-as-a-standalone-collector
[16]: https://github.com/open-telemetry/opentelemetry-collector/blob/master/docs/images/opentelemetry-service-deployment-models.png