[Bug]: opentelemetry-appender-tracing not setting log timestamp #1479
@danielbcorreia what change is expected here in […]?
I believe the issue open-telemetry/opentelemetry-collector-contrib#11752 needs to be reopened. If the `Timestamp` is empty, the `ObservedTimestamp` should be used. A similar approach is used in the UserEvents exporter: https://github.com/open-telemetry/opentelemetry-rust-contrib/blob/a3e0d18247d972e36a386785e50ca78b2ceec234/opentelemetry-user-events-logs/src/logs/exporter.rs#L201. Also, please note that the native C++ elastic exporter populates both fields.
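The fallback described above can be sketched as follows. This is a minimal illustration with a hypothetical `effective_timestamp` helper, not the actual exporter code:

```rust
use std::time::SystemTime;

// Hypothetical helper: prefer the explicit timestamp when present,
// otherwise fall back to the observed timestamp.
fn effective_timestamp(timestamp: Option<SystemTime>, observed: SystemTime) -> SystemTime {
    timestamp.unwrap_or(observed)
}

fn main() {
    let observed = SystemTime::now();

    // No explicit timestamp recorded: the observed one is used.
    assert_eq!(effective_timestamp(None, observed), observed);

    // An explicit timestamp takes precedence over the observed one.
    let explicit = SystemTime::UNIX_EPOCH;
    assert_eq!(effective_timestamp(Some(explicit), observed), explicit);
}
```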
Hey, thanks - I wasn't aware that the timestamp wasn't part of the tracing Event/Metadata. I had a look and realised it is handled by […]. To that end, wouldn't assigning the timestamp (as opposed to the observed timestamp) mimic the behaviour of […]? Not saying this should be the way forward - I'm quite new to the Rust ecosystem myself, so I'm just trying to make sense of it all.
I'll reach out on that original issue to see if we can reopen it; it seems to make sense regardless.
@danielbcorreia While the opentelemetry-appender-tracing subscriber could set the log timestamp on behalf of the tracing library, that timestamp may not represent the true time the log was created. Consider a situation where the application uses several layers for Tokio tracing, with opentelemetry-appender-tracing positioned at the end of the sequence:

```rust
let layer1 = ...; // Some Layer
let layer2 = ...; // Another Layer
let layer3 = layer::OpenTelemetryTracingBridge::new(&provider);

let subscriber = Registry::default()
    .with(layer1)
    .with(layer2)
    .with(layer3);
```

In this setup, if the processing by layer1 and layer2 consumes, for instance, 10 milliseconds, the timestamp assigned by opentelemetry-appender-tracing would inaccurately be 10 milliseconds later than the true creation time of the log. Given this potential discrepancy, it may be better not to set a `Timestamp` at all, rather than risk recording one that is incorrect or misleading.
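The drift described above can be illustrated with a toy simulation using plain `std::time` (this is not the tracing API; the 10 ms sleep stands in for work done by the earlier layers):

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

fn main() {
    // When the event is actually emitted by the application.
    let created_at = Instant::now();

    // Stand-in for processing done by layer1 and layer2 before the
    // bridge at the end of the stack ever sees the event.
    sleep(Duration::from_millis(10));

    // What a bridge that stamps "now" at the end of the stack records.
    let stamped_at = Instant::now();

    // The recorded timestamp lags the true creation time by at least
    // the time spent in the earlier layers.
    let drift = stamped_at.duration_since(created_at);
    assert!(drift >= Duration::from_millis(10));
}
```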
Closing, as no further action is needed in this repo. open-telemetry/opentelemetry-collector-contrib#31844 has already been merged in the collector, so the original issue should be resolved now.
What happened?

Logs exported from `opentelemetry-appender-tracing` are missing a timestamp. This seems to be intentional in some way: https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-appender-tracing/src/layer.rs#L175-L176

Although the comment mentions the `ObservedTimestamp`, not the `Timestamp` itself. This results in an issue down the line in the OpenTelemetry Collector, as it doesn't fall back to `ObservedTimestamp` when the timestamp is empty. I suspect this issue, although closed, is related: open-telemetry/opentelemetry-collector-contrib#11752

This happens even when using the basic example: https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-appender-tracing/examples/basic.rs

In the end, for my particular use case, it means that exporting logs from `tracing` through OTEL to the OTEL Collector and then to Elasticsearch results in all records being populated with the epoch timestamp `Jan 1, 1970 @ 01:00:00.000`.
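The epoch rendering follows from the wire format: OTLP log records carry the timestamp as nanoseconds since the Unix epoch, and a field that is never set decodes as 0 (the proto3 default). A minimal sketch of what the backend effectively sees:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn main() {
    // A log record whose Timestamp was never set arrives with
    // time_unix_nano = 0, the proto3 default for a missing field.
    let time_unix_nano: u64 = 0;

    // Reconstructing the wall-clock time from epoch nanoseconds
    // therefore yields the Unix epoch, i.e. Jan 1, 1970.
    let rendered = UNIX_EPOCH + Duration::from_nanos(time_unix_nano);
    assert_eq!(rendered, SystemTime::UNIX_EPOCH);
}
```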
API Version

0.21

SDK Version

0.21
What Exporters are you seeing the problem on?
N/A
Relevant log output