| Status | |
| --- | --- |
| Stability | development: profiles<br>beta: traces, metrics, logs |
| Distributions | contrib, k8s |
| Code Owners | @Aneurysm9, @dashpole |
The resource detection processor can be used to detect resource information from the host, in a format that conforms to the OpenTelemetry resource semantic conventions, and append or override the resource values in telemetry data with this information.
The `env` detector reads resource information from the `OTEL_RESOURCE_ATTRIBUTES` environment variable. This is expected to be in the format `<key1>=<value1>,<key2>=<value2>,...`, the details of which are currently pending confirmation in the OpenTelemetry specification.
Example:
```yaml
processors:
  resourcedetection/env:
    detectors: [env]
    timeout: 2s
    override: false
```
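For illustration, a hypothetical deployment snippet setting the variable on the Collector container (the attribute keys and values below are placeholders); with this in place, the detector would add `service.namespace` and `deployment.environment` to the resource:

```yaml
# Hypothetical container spec snippet; attribute keys/values are examples only
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "service.namespace=my-namespace,deployment.environment=production"
```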
Note: use the Docker detector (see below) if running the Collector as a Docker container.
The `system` detector queries the host machine to retrieve system-related resource attributes. The list of the populated resource attributes can be found at System Detector Resource Attributes.
By default, `host.name` is set to the FQDN if possible, with the hostname provided by the OS used as a fallback. This logic can be changed with the `hostname_sources` configuration, which is set to `["dns", "os"]` by default.
Use the following configuration to skip the FQDN lookup and use only the hostname provided by the OS:

```yaml
processors:
  resourcedetection/system:
    detectors: ["system"]
    system:
      hostname_sources: ["os"]
```
All valid options for `hostname_sources`:

* `"dns"`
* `"os"`
* `"cname"`
* `"lookup"`
The "dns" hostname source uses multiple sources to get the fully qualified domain name. First, it looks up the
host name in the local machine's hosts
file. If that fails, it looks up the CNAME. Lastly, if that fails,
it does a reverse DNS query. Note: this hostname source may produce unreliable results on Windows. To produce
a FQDN, Windows hosts might have better results using the "lookup" hostname source, which is mentioned below.
The "os" hostname source provides the hostname provided by the local machine's kernel.
The "cname" hostname source provides the canonical name, as provided by net.LookupCNAME in the Go standard library. Note: this hostname source may produce unreliable results on Windows.
The "lookup" hostname source does a reverse DNS lookup of the current host's IP address.
The `docker` detector queries the Docker daemon to retrieve resource attributes from the host machine. The list of the populated resource attributes can be found at Docker Detector Resource Attributes.
You need to mount the Docker socket (`/var/run/docker.sock` on Linux) so the Collector can contact the Docker daemon. Docker detection does not work on macOS.
Example:
```yaml
processors:
  resourcedetection/docker:
    detectors: [env, docker]
    timeout: 2s
    override: false
```
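As a minimal sketch of mounting the socket, assuming Docker Compose and the contrib image (names are placeholders):

```yaml
# Hypothetical docker-compose snippet: mount the Docker socket so the
# detector can reach the Docker daemon (Linux socket path)
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```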
When Heroku dyno metadata is active, Heroku applications publish information through environment variables. The `heroku` detector maps these environment variables to resource attributes as follows:
| Dyno metadata environment variable | Resource attribute |
| --- | --- |
| `HEROKU_APP_ID` | `heroku.app.id` |
| `HEROKU_APP_NAME` | `service.name` |
| `HEROKU_DYNO_ID` | `service.instance.id` |
| `HEROKU_RELEASE_CREATED_AT` | `heroku.release.creation_timestamp` |
| `HEROKU_RELEASE_VERSION` | `service.version` |
| `HEROKU_SLUG_COMMIT` | `heroku.release.commit` |
For more information, see the Heroku cloud provider documentation under the OpenTelemetry specification semantic conventions.
The list of the populated resource attributes can be found at Heroku Detector Resource Attributes.
```yaml
processors:
  resourcedetection/heroku:
    detectors: [env, heroku]
    timeout: 2s
    override: false
```
The `gcp` detector uses the Google Cloud Client Libraries for Go to read resource information from the metadata server and environment variables to detect which GCP platform the application is running on, and detect the appropriate attributes for that platform. Regardless of the GCP platform the application is running on, use the `gcp` detector:
Example:
```yaml
processors:
  resourcedetection/gcp:
    detectors: [env, gcp]
    timeout: 2s
    override: false
```
The list of the populated resource attributes can be found at GCP Detector Resource Attributes.
On Google Compute Engine (GCE), the detector populates:

* cloud.provider ("gcp")
* cloud.platform ("gcp_compute_engine")
* cloud.account.id (project id)
* cloud.region (e.g. us-central1)
* cloud.availability_zone (e.g. us-central1-c)
* host.id (instance id)
* host.name (instance name)
* host.type (machine type)
* (optional) gcp.gce.instance.hostname
* (optional) gcp.gce.instance.name
On Google Kubernetes Engine (GKE):

* cloud.provider ("gcp")
* cloud.platform ("gcp_kubernetes_engine")
* cloud.account.id (project id)
* cloud.region (only for regional GKE clusters; e.g. "us-central1")
* cloud.availability_zone (only for zonal GKE clusters; e.g. "us-central1-c")
* k8s.cluster.name
* host.id (instance id)
* host.name (instance name; only when workload identity is disabled)
One known issue: when GKE workload identity is enabled, the GCE metadata endpoints won't be available, so the GKE resource detector won't be able to determine `host.name`. In that case, users are encouraged to set `host.name` from either:

- `node.name` through the downward API with the `env` detector (see the sketch below)
- obtaining the Kubernetes node name from the Kubernetes API (with `k8s.io/client-go`)
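A minimal sketch of the downward-API approach, assuming the `env` detector is enabled on the Collector; Kubernetes expands `$(K8S_NODE_NAME)` because it is defined earlier in the env list:

```yaml
# Hypothetical Collector container spec: expose the node name via the
# downward API and feed it to the env detector as host.name
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.name=$(K8S_NODE_NAME)"
```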
On Cloud Run:

* cloud.provider ("gcp")
* cloud.platform ("gcp_cloud_run")
* cloud.account.id (project id)
* cloud.region (e.g. "us-central1")
* faas.id (instance id)
* faas.name (service name)
* faas.version (service revision)
On Cloud Run jobs:

* cloud.provider ("gcp")
* cloud.platform ("gcp_cloud_run")
* cloud.account.id (project id)
* cloud.region (e.g. "us-central1")
* faas.id (instance id)
* faas.name (service name)
* gcp.cloud_run.job.execution ("my-service-ajg89")
* gcp.cloud_run.job.task_index ("0")
On Cloud Functions:

* cloud.provider ("gcp")
* cloud.platform ("gcp_cloud_functions")
* cloud.account.id (project id)
* cloud.region (e.g. "us-central1")
* faas.id (instance id)
* faas.name (function name)
* faas.version (function version)
On App Engine:

* cloud.provider ("gcp")
* cloud.platform ("gcp_app_engine")
* cloud.account.id (project id)
* cloud.region (e.g. "us-central1")
* cloud.availability_zone (e.g. "us-central1-c")
* faas.id (instance id)
* faas.name (service name)
* faas.version (service version)
The `ec2` detector uses the AWS SDK for Go to read resource information from the EC2 instance metadata API to retrieve related resource attributes. The list of the populated resource attributes can be found at EC2 Detector Resource Attributes.
It can also optionally gather tags for the EC2 instance that the Collector is running on. Note that in order to fetch EC2 tags, the IAM role assigned to the EC2 instance must have a policy that includes the `ec2:DescribeTags` permission.
EC2 custom configuration example:
```yaml
processors:
  resourcedetection/ec2:
    detectors: ["ec2"]
    ec2:
      # A list of regexes to match tag keys that should be added as resource attributes
      tags:
        - ^tag1$
        - ^tag2$
        - ^label.*$
```
If you are using a proxy server on your EC2 instance, it's important that you exempt requests for instance metadata, as described in the AWS CLI user guide. Failing to do so can result in proxied or missing instance data.
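For example, a sketch of a containerized Collector's environment (the proxy address is a placeholder) that exempts the instance metadata address:

```yaml
# Hypothetical environment for the Collector: keep IMDS traffic off the proxy
env:
  - name: HTTPS_PROXY
    value: "http://proxy.internal:3128"  # placeholder proxy address
  - name: NO_PROXY
    value: "169.254.169.254"             # EC2 instance metadata service
```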
If the instance is part of AWS ParallelCluster and the detector is failing to connect to the metadata server, check the iptables rules and make sure the `PARALLELCLUSTER_IMDS` chain contains a rule that allows the Collector's user to access `169.254.169.254/32`.
The `ecs` detector queries the Task Metadata Endpoint (TMDE) to record information about the current ECS task. Only TMDE V4 and V3 are supported.
The list of the populated resource attributes can be found at ECS Detector Resource Attributes.
Example:
```yaml
processors:
  resourcedetection/ecs:
    detectors: [env, ecs]
    timeout: 2s
    override: false
```
The `elastic_beanstalk` detector reads the AWS X-Ray configuration file available on all Beanstalk instances with X-Ray enabled.
The list of the populated resource attributes can be found at Elastic Beanstalk Detector Resource Attributes.
Example:
```yaml
processors:
  resourcedetection/elastic_beanstalk:
    detectors: [env, elastic_beanstalk]
    timeout: 2s
    override: false
```
The list of the resource attributes populated by the `eks` detector can be found at EKS Detector Resource Attributes.
Example:
```yaml
processors:
  resourcedetection/eks:
    detectors: [env, eks]
    timeout: 15s
    override: false
```
Cluster name detection is disabled by default, and can be enabled with the following configuration:
```yaml
processors:
  resourcedetection/eks:
    detectors: [env, eks]
    timeout: 15s
    override: false
    eks:
      resource_attributes:
        k8s.cluster.name:
          enabled: true
```
Note: The Kubernetes cluster name is only available when running on EC2 instances, and requires permission to run the `ec2:DescribeInstances` action.
If you see an error with the message `context deadline exceeded`, please increase the timeout setting in your config.
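For example (the value is illustrative):

```yaml
processors:
  resourcedetection/eks:
    detectors: [eks]
    timeout: 30s
```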
The `lambda` detector uses the AWS Lambda runtime environment variables to retrieve related resource attributes.
The list of the populated resource attributes can be found at Lambda Detector Resource Attributes.
Example:
```yaml
processors:
  resourcedetection/lambda:
    detectors: [env, lambda]
    timeout: 0.2s
    override: false
```
The `azure` detector queries the Azure Instance Metadata Service to retrieve related resource attributes.
The list of the populated resource attributes can be found at Azure Detector Resource Attributes.
Example:
```yaml
processors:
  resourcedetection/azure:
    detectors: [env, azure]
    timeout: 2s
    override: false
```
It can also optionally gather tags from the Azure instance that the Collector is running on.
Azure custom configuration example:
```yaml
processors:
  resourcedetection/azure:
    detectors: ["azure"]
    azure:
      # A list of regexes to match tag keys that should be added as resource attributes
      tags:
        - ^tag1$
        - ^tag2$
        - ^label.*$
```
Matched tags are added as:

* `azure.tags.<tag name>`
The list of the resource attributes populated by the `aks` detector can be found at AKS Detector Resource Attributes.
```yaml
processors:
  resourcedetection/aks:
    detectors: [env, aks]
    timeout: 2s
    override: false
```
Cluster name detection is disabled by default, and can be enabled with the following configuration:
```yaml
processors:
  resourcedetection/aks:
    detectors: [aks]
    timeout: 2s
    override: false
    aks:
      resource_attributes:
        k8s.cluster.name:
          enabled: true
```
The Azure AKS cluster name is derived from the Azure Instance Metadata Service's (IMDS) infrastructure resource group field. This field contains the resource group and the name of the cluster, separated by underscores, e.g. `MC_<resource group>_<cluster name>_<location>`.
Example:
- Resource group: my-resource-group
- Cluster name: my-cluster
- Location: eastus
- Generated name: MC_my-resource-group_my-cluster_eastus
The cluster name is detected if it does not contain underscores and if a custom infrastructure resource group name was not used.
If accurate parsing cannot be performed, the infrastructure resource group value is returned. This value can be used to uniquely identify the cluster, as Azure will not allow users to create multiple clusters with the same infrastructure resource group name.
The `consul` detector queries a Consul agent and reads its configuration endpoint to retrieve related resource attributes.
The list of the populated resource attributes can be found at Consul Detector Resource Attributes.
In addition, it reads all key/value pairs in Consul metadata into label/label-value pairs (exploded Consul metadata).
```yaml
processors:
  resourcedetection/consul:
    detectors: [env, consul]
    timeout: 2s
    override: false
```
The `k8snode` detector queries the Kubernetes API server to retrieve node resource attributes.
The list of the populated resource attributes can be found at k8snode Detector Resource Attributes.
The following permissions are required:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]
```
| Name | Type | Required | Default | Docs |
| --- | --- | --- | --- | --- |
| auth_type | string | No | serviceAccount | How to authenticate to the K8s API server. This can be one of `none` (for no auth), `serviceAccount` (to use the standard service account token provided to the agent pod), or `kubeConfig` to use credentials from `~/.kube/config`. |
| node_from_env_var | string | Yes | K8S_NODE_NAME | The environment variable name that holds the name of the node to retrieve metadata from. Default value is `K8S_NODE_NAME`. You can set the env dynamically on the workload definition using the downward API; see the examples below. |
Example using the default `K8S_NODE_NAME` environment variable:

```yaml
processors:
  resourcedetection/k8snode:
    detectors: [k8snode]
```
and add this to your workload:
```yaml
env:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```
To read the node name from a custom environment variable instead:

```yaml
processors:
  resourcedetection/k8snode:
    detectors: [k8snode]
    k8snode:
      node_from_env_var: "my_custom_var"
```
and add this to your workload:
```yaml
env:
  - name: my_custom_var
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```
The `openshift` detector queries the OpenShift and Kubernetes API to retrieve related resource attributes.
The list of the populated resource attributes can be found at Openshift Detector Resource Attributes.
The following permissions are required:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: ["config.openshift.io"]
    resources: ["infrastructures", "infrastructures/status"]
    verbs: ["get", "watch", "list"]
```
By default, the API address is determined from the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT`, and the service token is read from `/var/run/secrets/kubernetes.io/serviceaccount/token`.
If TLS is not explicitly disabled and no `ca_file` is configured, `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` is used.
The determination of the API address, `ca_file`, and the service token is skipped if they are set in the configuration.
Example:
```yaml
processors:
  resourcedetection/openshift:
    detectors: [openshift]
    timeout: 2s
    override: false
    openshift: # optional
      address: "https://api.example.com"
      token: "token"
      tls:
        insecure: false
        ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
```
See: TLS Configuration Settings for the full set of available options.
The global configuration options for the processor are:

```yaml
# a list of resource detectors to run, valid options are: "env", "system", "docker", "gcp", "ec2", "ecs", "elastic_beanstalk", "eks", "lambda", "azure", "aks", "consul", "heroku", "k8snode", "openshift"
detectors: [ <string> ]
# determines if existing resource attributes should be overridden or preserved, defaults to true
override: <bool>
# [DEPRECATED] When included, only attributes in the list will be appended. Applies to all detectors.
attributes: [ <string> ]
```
Moreover, you can specify which detector should collect each attribute with the `resource_attributes` option. An example of such a configuration:
```yaml
resourcedetection:
  detectors: [system, ec2]
  system:
    resource_attributes:
      host.name:
        enabled: true
      host.id:
        enabled: false
  ec2:
    resource_attributes:
      host.name:
        enabled: false
      host.id:
        enabled: true
```
The `attributes` option is deprecated and will be removed soon; from now on you should enable or disable attributes through `resource_attributes`.
For example, this config:
```yaml
resourcedetection:
  detectors: [system]
  attributes: ['host.name', 'host.id']
```
can be replaced with:
```yaml
resourcedetection:
  detectors: [system]
  system:
    resource_attributes:
      host.name:
        enabled: true
      host.id:
        enabled: true
      os.type:
        enabled: false
```
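For context, a minimal sketch of wiring the processor into a pipeline (the receiver and exporter choices are placeholders):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
processors:
  resourcedetection:
    detectors: [env, system]
    timeout: 2s
    override: false
exporters:
  debug:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [debug]
```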
Note that if multiple detectors insert the same attribute name, the first detector to insert wins. For example, if you had `detectors: [eks, ec2]`, then `cloud.platform` will be `aws_eks` instead of `aws_ec2`. The ordering below is recommended.
- lambda
- elastic_beanstalk
- eks
- ecs
- ec2
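Applied to the `detectors` list, the recommended ordering looks like this (include only the detectors relevant to your environment):

```yaml
processors:
  resourcedetection:
    detectors: [lambda, elastic_beanstalk, eks, ecs, ec2]
```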
The full list of settings exposed for this processor is documented here, with detailed sample configurations here.