
Elasticsearch exporter does not set Timestamp (logs) #11752

Closed

jsirianni opened this issue Jun 27, 2022 · 9 comments
Assignees: JaredTan95
Labels: bug (Something isn't working), exporter/elasticsearch, priority:p2 (Medium)

Comments

@jsirianni (Member) commented Jun 27, 2022

Describe the bug

When using the Elasticsearch exporter, the log timestamp is omitted unless it has been set by an operator.

Steps to reproduce

Use the filelog receiver without any parsing (do not parse a timestamp into the log entry). When viewed in Kibana, the logs show up in the year 1970 (the zero value for the timestamp).

What did you expect to see?

The log data model has two timestamps: Timestamp and ObservedTimestamp. My understanding is that vendors should use Timestamp when it is set and fall back to ObservedTimestamp when it is not, as sketched below.
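
For illustration, here is a minimal Go sketch of that selection rule against the collector's pdata API (the package and helper name are hypothetical, not part of any exporter):

package logutil

import (
	"time"

	"go.opentelemetry.io/collector/pdata/plog"
)

// effectiveTimestamp returns Timestamp when it is set and otherwise falls
// back to ObservedTimestamp, following the "Note to Vendors" guidance.
func effectiveTimestamp(record plog.LogRecord) time.Time {
	if record.Timestamp() != 0 {
		return record.Timestamp().AsTime()
	}
	return record.ObservedTimestamp().AsTime()
}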

What did you see instead?

It seems like ObservedTimestamp is being ignored, and because Timestamp is not set, the Elasticsearch exporter uses the zero value for all ingested logs. I cannot select ObservedTimestamp in the Kibana UI, as if it were stripped from the log entry.

[Screenshot from 2022-06-27 14-31-48]

What version did you use?

Contrib v0.53.0

What config did you use?

receivers:
    filelog:
      include:
        - /var/log/elasticsearch/elasticsearch_server.json
      start_at: beginning
processors:
    batch:
    resourcedetection:
        detectors:
            - system
        system:
            hostname_sources:
                - os
exporters:
    elasticsearch:
        endpoints:
            - http://localhost:9200
        index: test
    logging:
        logLevel: debug
service:
    pipelines:
        logs/file__source0__debug:
            receivers:
                - filelog
            processors:
                - batch
                - resourcedetection
            exporters:
                - logging
                - elasticsearch

Environment

  • Debian 11

Additional context

See the Log collection v0.29.0 "Note to Vendors" on how to use timestamps: https://github.com/open-telemetry/opentelemetry-log-collection/releases/tag/v0.29.0

The logging exporter gives the following log entries:

Jun 27 18:35:37 iris observiq-otel-collector[4169]: LogRecord #5
Jun 27 18:35:37 iris observiq-otel-collector[4169]: ObservedTimestamp: 2022-06-27 18:35:37.659048654 +0000 UTC
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Timestamp: 1970-01-01 00:00:00 +0000 UTC
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Severity:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Body: {"type": "server", "timestamp": "2022-06-27T18:25:34,393Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "elasticsearch", "node.name": "iris", "message": "[.ds-logs-generic-default-2022.06.27-000001/x9qdwKGLRsWh0i4JpyIGDA] deleting index", "cluster.uuid": "o3D3ppAvSUyl-Bo6aY0rJQ", "node.id": "E4azM21dRzui5QwHYt7ZFg"  }
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Attributes:
Jun 27 18:35:37 iris observiq-otel-collector[4169]:      -> log_type: STRING(file)
Jun 27 18:35:37 iris observiq-otel-collector[4169]:      -> log.file.name: STRING(elasticsearch_server.json)
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Trace ID:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Span ID:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Flags: 0
Jun 27 18:35:37 iris observiq-otel-collector[4169]: LogRecord #6
Jun 27 18:35:37 iris observiq-otel-collector[4169]: ObservedTimestamp: 2022-06-27 18:35:37.659049852 +0000 UTC
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Timestamp: 1970-01-01 00:00:00 +0000 UTC
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Severity:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Body: {"type": "server", "timestamp": "2022-06-27T18:25:47,022Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "elasticsearch", "node.name": "iris", "message": "[test] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]", "cluster.uuid": "o3D3ppAvSUyl-Bo6aY0rJQ", "node.id": "E4azM21dRzui5QwHYt7ZFg"  }
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Attributes:
Jun 27 18:35:37 iris observiq-otel-collector[4169]:      -> log.file.name: STRING(elasticsearch_server.json)
Jun 27 18:35:37 iris observiq-otel-collector[4169]:      -> log_type: STRING(file)
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Trace ID:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Span ID:
Jun 27 18:35:37 iris observiq-otel-collector[4169]: Flags: 0
jsirianni added the bug label on Jun 27, 2022
@TylerHelmuth (Member) commented:

pinging @urso @faec @blakerouse as code owners

@github-actions (Contributor) commented:

Pinging code owners: @urso @faec @blakerouse. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions (Contributor) commented:

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label on Nov 28, 2022
@JaredTan95 (Member) commented Nov 28, 2022

The latest version of the Elasticsearch exporter has already fixed this:

Trace ID: 
Span ID: 
Flags: 0
LogRecord #2
ObservedTimestamp: 2022-11-28 06:55:01.352423736 +0000 UTC
Timestamp: 2021-02-16 09:21:17.521307236 +0000 UTC
SeverityText: 
SeverityNumber: Unspecified(0)
Body: Str(2021-02-16T09:21:17.521307236Z stdout F example: 12 Tue Feb 16 09:21:17 UTC 2021)
Attributes:
     -> time: Str(2021-02-16T09:21:17.521307236Z)
     -> log: Str(example: 12 Tue Feb 16 09:21:17 UTC 2021)
     -> log.iostream: Str(stdout)
     -> log.file.path: Str(/var/log/pods/containerd_logs-0_000011112222333344445555666677778888/logs/0.log)
     -> logtag: Str(F)

And the corresponding Elasticsearch index document:

{
  "_index": ".ds-logs-generic-default-2022.11.28-000001",
  "_type": "_doc",
  "_id": "mzcEvYQBz1BRB43O_bH3",
  "_score": 1,
  "_source": {
    "@timestamp": "2021-02-16T09:21:17.521307236Z",
    "Attributes.log.file.path": "/var/log/pods/containerd_logs-0_000011112222333344445555666677778888/logs/0.log",
    "Attributes.log.iostream": "stdout",
    "Attributes.log.value": "example: 10 Tue Feb 16 09:21:15 UTC 2021",
    "Attributes.logtag": "F",
    "Attributes.time": "2021-02-16T09:21:17.521307236Z",
    "Body": "2021-02-16T09:21:15.518430714Z stdout F example: 10 Tue Feb 16 09:21:15 UTC 2021",
    "Resource.k8s.container.name": "logs",
    "Resource.k8s.container.restart_count": "0",
    "Resource.k8s.namespace.name": "containerd",
    "Resource.k8s.pod.name": "logs-0",
    "Resource.k8s.pod.uid": "000011112222333344445555666677778888",
    "SeverityNumber": 0,
    "TraceFlags": 0
  }
}

@TylerHelmuth (Member) commented:

Closing based on @JaredTan95's comment. @jsirianni, please reopen if the issue is not fixed.

@jsirianni (Member, Author) commented:

This is not resolved. If Timestamp is zero (1970-01-01T00:00:00Z), the ObservedTimestamp value should be used, but it is not.

I took a quick look at the exporter's code, and it seems to incorrectly assume that Timestamp is always the right choice: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/elasticsearchexporter/model.go#L53

In a perfect world, Timestamp would always be set; however, this is not always the case.
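
To make the behavior concrete, here is a small, self-contained Go example (a sketch against the collector's pdata API, not the actual exporter code or the eventual fix) with a record that carries only ObservedTimestamp, which is what the filelog receiver produces when no timestamp operator is configured:

package main

import (
	"fmt"
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/plog"
)

func main() {
	// Only ObservedTimestamp is populated; Timestamp stays at its zero value.
	record := plog.NewLogRecord()
	record.SetObservedTimestamp(pcommon.NewTimestampFromTime(time.Now()))

	// Using Timestamp unconditionally yields the Unix epoch (the reported bug).
	fmt.Println("Timestamp only:", record.Timestamp().AsTime())

	// Falling back to ObservedTimestamp yields a usable @timestamp value.
	ts := record.Timestamp()
	if ts == 0 {
		ts = record.ObservedTimestamp()
	}
	fmt.Println("with fallback: ", ts.AsTime())
}

With the zero Timestamp, the first line prints 1970-01-01 00:00:00 +0000 UTC, matching the records in the original report.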

@danielbcorreia commented:

Hi all, I'm facing the same issue as above; details in open-telemetry/opentelemetry-rust#1479.

In this case, the (Rust) tokio tracing-to-OpenTelemetry bridge doesn't use the Timestamp field (which is optional), and only ObservedTimestamp is filled in by the API. This should trigger the fallback, but it doesn't, and the consequence is that every Elasticsearch record has @timestamp set to the epoch.

Can we reopen this issue please?

crobert-1 reopened this on Feb 7, 2024
crobert-1 removed the Stale label on Feb 7, 2024
JaredTan95 self-assigned this on Mar 20, 2024
JaredTan95 added a commit to JaredTan95/opentelemetry-collector-contrib that referenced this issue Mar 20, 2024
Signed-off-by: Jared Tan <[email protected]>
mx-psi pushed a commit that referenced this issue Mar 25, 2024
…amp overwrites the timestamp (#31844)

**Link to tracking Issue:**
#11752

---------

Signed-off-by: Jared Tan <[email protected]>
Co-authored-by: Andrzej Stencel <[email protected]>
@ycombinator (Contributor) commented:

Can this issue be closed now, since #31844 has been merged?

@jsirianni (Member, Author) commented:

> Can this issue be closed now, since #31844 has been merged?

Yes, sounds like the issue is resolved.
