| Status | |
| ------------- | ----------- |
| Stability | beta: traces, logs |
| Distributions | contrib |
| Issues | |
| Code Owners | @JaredTan95, @ycombinator, @carsonip |
This exporter supports sending OpenTelemetry logs and traces to Elasticsearch.
Exactly one of the following settings is required:
- `endpoint` (no default): The target Elasticsearch URL to which data will be sent (e.g. `https://elasticsearch:9200`).
- `endpoints` (no default): A list of Elasticsearch URLs to which data will be sent, attempted in round-robin order.
- `cloudid` (no default): The Elastic Cloud ID of the Elastic Cloud Cluster to which data will be sent (e.g. `foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=`).
When the above settings are missing, `endpoints` will default to the comma-separated `ELASTICSEARCH_URL` environment variable.
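For illustration, a minimal sketch of each option (hostnames are placeholders):

```yaml
exporters:
  elasticsearch:
    # Option 1: a single URL
    endpoint: https://elasticsearch:9200

  elasticsearch/multi:
    # Option 2: several URLs, used in round-robin order
    endpoints:
      - https://es-node-1.example.com:9200
      - https://es-node-2.example.com:9200

  elasticsearch/cloud:
    # Option 3: an Elastic Cloud ID
    cloudid: foo:YmFyLmNsb3VkLmVzLmlvJGFiYzEyMyRkZWY0NTY=
```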
Elasticsearch credentials may be configured via Authentication configuration settings. As a shortcut, the following settings are also supported:
- `user` (optional): Username used for HTTP Basic Authentication.
- `password` (optional): Password used for HTTP Basic Authentication.
- `api_key` (optional): Elasticsearch API Key in "encoded" format.
Example:
```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    auth:
      authenticator: basicauth

extensions:
  basicauth:
    username: elastic
    password: changeme

······

service:
  extensions: [basicauth]
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [elasticsearch]
```
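As an alternative to the extension-based configuration above, the shortcut settings can be set directly on the exporter. A minimal sketch, assuming the same endpoint (the key value is a placeholder):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    # Encoded Elasticsearch API key; mutually exclusive with user/password in practice
    api_key: "<encoded Elasticsearch API key>"
```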
The Elasticsearch exporter supports common HTTP Configuration Settings, except for `compression` (all requests are uncompressed). As a consequence of supporting confighttp, the Elasticsearch exporter also supports common TLS Configuration Settings.

The Elasticsearch exporter sets `timeout` (HTTP request timeout) to 90s by default. All other defaults are as defined by confighttp.
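For example, a sketch of overriding the request timeout and pointing TLS at a custom CA; the field names follow the standard confighttp/configtls settings, and the file path is a placeholder:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    timeout: 30s            # lower the default 90s HTTP request timeout
    tls:
      ca_file: /etc/ssl/certs/es-ca.pem   # custom CA for the Elasticsearch endpoint
```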
The Elasticsearch exporter supports the common `sending_queue` settings. However, the sending queue is currently disabled by default.
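A minimal sketch of enabling it, assuming the standard exporterhelper queue settings (values are illustrative):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    sending_queue:
      enabled: true       # disabled by default for this exporter
      num_consumers: 10
      queue_size: 1000
```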
Telemetry data will be written to signal-specific data streams by default: logs to `logs-generic-default`, and traces to `traces-generic-default`. This can be customised through the following settings:
- `index` (DEPRECATED, please use `logs_index` for logs, `traces_index` for traces): The index or data stream name to publish events to. The default value is `logs-generic-default`.
- `logs_index`: The index or data stream name to publish events to. The default value is `logs-generic-default`.
- `logs_dynamic_index` (optional): uses the resource or log record attributes named `elasticsearch.index.prefix` and `elasticsearch.index.suffix`, resulting in dynamically prefixed / suffixed indexing based on `logs_index`. (priority: resource attribute > log record attribute)
  - `enabled` (default=false): Enable/Disable dynamic index for log records.
- `traces_index`: The index or data stream name to publish traces to. The default value is `traces-generic-default`.
- `traces_dynamic_index` (optional): uses the resource or span attributes named `elasticsearch.index.prefix` and `elasticsearch.index.suffix`, resulting in dynamically prefixed / suffixed indexing based on `traces_index`. (priority: resource attribute > span attribute)
  - `enabled` (default=false): Enable/Disable dynamic index for trace spans.
- `logstash_format` (optional): Logstash format compatibility. Traces or logs data can be written into an index in Logstash format.
  - `enabled` (default=false): Enable/Disable Logstash format compatibility. When `logstash_format.enabled` is `true`, the index name is composed using `traces/logs_index` or `traces/logs_dynamic_index` as a prefix plus the date, e.g. if `traces/logs_index` or `traces/logs_dynamic_index` is equal to `otlp-generic-default`, your index will become `otlp-generic-default-YYYY.MM.DD`. The appended suffix is the date on which the data is generated.
  - `prefix_separator` (default=`-`): Set a separator between the Logstash prefix and the date.
  - `date_format` (default=`%Y.%m.%d`): Time format (based on strftime) to generate the second part of the index name.
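For instance, a sketch of routing logs to a custom data stream, enabling dynamic index attributes, and adding a Logstash-style date suffix (the data stream name is a placeholder):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    logs_index: logs-myapp-default      # base index / data stream name for logs
    logs_dynamic_index:
      enabled: true                     # honour elasticsearch.index.prefix/suffix attributes
    logstash_format:
      enabled: true                     # append a date, e.g. logs-myapp-default-2024.01.31
      prefix_separator: "-"
      date_format: "%Y.%m.%d"
```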
The Elasticsearch exporter supports several document schemas and preprocessing behaviours, which may be configured through the following settings:

- `mapping`: Events are encoded to JSON. The `mapping` allows users to configure additional mapping rules.
  - `mode` (default=none): The fields naming mode. Valid modes are:
    - `none`: Use original fields and event structure from the OTLP event.
    - `ecs`: Try to map fields to Elastic Common Schema (ECS).
    - `raw`: Omit the `Attributes.` string prefixed to field names for log and span attributes, as well as the `Events.` string prefixed to field names for span events.
  - `fields` (optional): Configure additional fields mappings.
  - `file` (optional): Read additional field mappings from the provided YAML file.
  - `dedup` (default=true): Try to find and remove duplicate fields/attributes from events before publishing to Elasticsearch. Some structured logging libraries can produce duplicate fields (for example zap). Elasticsearch will reject documents that have duplicate fields.
  - `dedot` (default=true): When enabled, attributes with `.` will be split into proper JSON objects.
Warning: the ECS mapping mode is currently undergoing changes, and its behaviour is unstable.

In ECS mapping mode, the Elasticsearch exporter attempts to map fields from OpenTelemetry Semantic Conventions (version 1.22.0) to Elastic Common Schema (ECS). This mode may be used for compatibility with existing dashboards that work with ECS.
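For example, a sketch of selecting the ECS mapping mode while leaving de-duplication and de-dotting at their defaults:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    mapping:
      mode: ecs     # map OTel semantic conventions to Elastic Common Schema
      dedup: true   # default; drop duplicate fields before indexing
      dedot: true   # default; expand dotted attribute names into nested objects
```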
Documents may be optionally passed through an Elasticsearch Ingest pipeline prior to indexing. This can be configured through the following settings:
- `pipeline` (optional): ID of an Elasticsearch Ingest pipeline used for processing documents published by the exporter.
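For instance, a sketch that routes documents through an Ingest pipeline; the pipeline ID is a placeholder and must already exist in Elasticsearch:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    pipeline: my-ingest-pipeline   # documents are processed by this pipeline before indexing
```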
The Elasticsearch exporter uses the Elasticsearch Bulk API for indexing documents. The behaviour of this bulk indexing can be configured with the following settings:
- `num_workers` (default=runtime.NumCPU()): Number of workers publishing bulk requests concurrently.
- `flush`: Event bulk indexer buffer flush settings.
  - `bytes` (default=5000000): Write buffer flush size limit.
  - `interval` (default=30s): Write buffer flush time limit.
- `retry`: Elasticsearch bulk request retry settings.
  - `enabled` (default=true): Enable/Disable request retry on error. Failed requests are retried with exponential backoff.
  - `max_requests` (default=3): Number of HTTP request retries.
  - `initial_interval` (default=100ms): Initial waiting time if an HTTP request failed.
  - `max_interval` (default=1m): Max waiting time if an HTTP request failed.
  - `retry_on_status` (default=[429, 500, 502, 503, 504]): Status codes that trigger request or document level retries. Request level retry and document level retry status codes are shared and cannot be configured separately. To avoid duplicates, it is recommended to set it to `[429]`. WARNING: The default will be changed to `[429]` in the future.
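For example, a sketch of tuning flushing and retries (the values are illustrative, not recommendations):

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    num_workers: 4
    flush:
      bytes: 10485760   # flush once the buffer reaches 10 MiB
      interval: 10s     # or at least every 10 seconds
    retry:
      enabled: true
      max_requests: 5
      retry_on_status: [429]   # retry only on "too many requests" to avoid duplicates
```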
The Elasticsearch Exporter will regularly check Elasticsearch for available nodes. Newly discovered nodes will automatically be used for load balancing. Settings related to node discovery are:
- `discover`:
  - `on_start` (optional): If enabled, the exporter queries Elasticsearch for all known nodes in the cluster on startup.
  - `interval` (optional): Interval to update the list of Elasticsearch nodes.

Node discovery can be disabled by setting `discover.interval` to 0.
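For instance, a sketch that discovers nodes on startup and refreshes the node list every five minutes:

```yaml
exporters:
  elasticsearch:
    endpoint: https://elastic.example.com:9200
    discover:
      on_start: true
      interval: 5m    # set to 0 to disable node discovery
```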