otelconsumer: handle entity too large errors #41523
Conversation
The elasticsearchexporter does not handle a 413 Request Entity Too Large response from Elasticsearch; it only forwards the error back to the client. When using the batcher config in the ES exporter, sending runs synchronously, so any reported error can be intercepted in the otelconsumer. When using the batch processor, sending happens asynchronously and there is no way to handle the error. If we can intercept an entity too large error, we can split the batch and retry each half.
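A minimal sketch of the interception idea on the synchronous path, not the actual beats implementation: the function name `consumeWithSplitRetry` and the error-matching string are illustrative assumptions.

```go
// Sketch (assumed, not the actual beats code): intercept a
// "Request Entity Too Large" error from the exporter, split the
// logs batch in half by resource, and retry each half.
package otelconsumer

import (
	"context"
	"strings"

	"go.opentelemetry.io/collector/consumer"
	"go.opentelemetry.io/collector/pdata/plog"
)

// consumeWithSplitRetry forwards logs to the next consumer. If the export
// fails with an entity-too-large error, it splits the batch and retries
// each half recursively until the batch fits or cannot be split further.
func consumeWithSplitRetry(ctx context.Context, next consumer.Logs, ld plog.Logs) error {
	err := next.ConsumeLogs(ctx, ld)
	if err == nil || !strings.Contains(err.Error(), "Request Entity Too Large") {
		return err
	}
	if ld.ResourceLogs().Len() <= 1 {
		// A single resource record cannot be split further; give up.
		return err
	}
	half := ld.ResourceLogs().Len() / 2
	left, right := plog.NewLogs(), plog.NewLogs()
	for i := 0; i < ld.ResourceLogs().Len(); i++ {
		dst := left
		if i >= half {
			dst = right
		}
		ld.ResourceLogs().At(i).CopyTo(dst.ResourceLogs().AppendEmpty())
	}
	if err := consumeWithSplitRetry(ctx, next, left); err != nil {
		return err
	}
	return consumeWithSplitRetry(ctx, next, right)
}
```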
This pull request does not have a backport label. To fix up this pull request, add the backport labels for the needed branches.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
LGTM. Thanks.
* otelconsumer: handle entity too large errors

The elasticsearchexporter does not handle a 413 Request Entity Too Large response from Elasticsearch; it only forwards the error back to the client. When using the batcher config in the ES exporter, sending runs synchronously, so any reported error can be intercepted in the otelconsumer. When using the batch processor, sending happens asynchronously and there is no way to handle the error. If we can intercept an entity too large error, we can split the batch and retry.

* use logp.TestingSetup

(cherry picked from commit 9c34e4e)

Co-authored-by: Mauri de Souza Meneguzzo <[email protected]>
This code was initially added in #41523 because of a limitation in the elasticsearch exporter. The exporter has since been updated to enforce flush::max_bytes for the batcher extension and will automatically split a batch that exceeds the limit. This is fixed in collector v0.115.0; see open-telemetry/opentelemetry-collector-contrib#36396.

(cherry picked from commit dbeb9cd)

Co-authored-by: Mauri de Souza Meneguzzo <[email protected]>
Proposed commit message
The elasticsearchexporter does not handle a 413 Request Entity Too Large response from Elasticsearch; it only forwards the error back to the client.

When using the batcher config in the ES exporter, sending runs synchronously, so any reported error can be intercepted in the otelconsumer. When using the batch processor, sending happens asynchronously and there is no way to handle the error. If we can intercept an entity too large error, we can split the batch and retry.
See open-telemetry/opentelemetry-collector-contrib#36163
Checklist
CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc

Author's Checklist
Related issues