Limit data production from receivers to avoid backpressure scenarios #29410
I would view this feature request as supplementary to the linked PR. It is still useful to explicitly limit the production of data without needing to encounter backpressure and trigger retries. It would help to avoid potential loss of data due to retries being exhausted in a situation where … Additionally, relying on the exporter to trigger the slowdown is not feasible in all architectures. For example, if you're sending from …
I think you have some good points here; backpressure should not necessarily come from exporters only. I'm thinking it would make the most sense as a per-receiver config, along the lines of consumerretry, which can be embedded into any receiver config. I'd be curious to hear what @dmitryax thinks of this.
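For reference, a minimal sketch of how the consumerretry-based `retry_on_failure` block is already embedded in the filelog receiver config today (values are illustrative); a per-receiver rate limit could be embedded following the same pattern:

```yaml
receivers:
  filelog:
    include: [/var/log/app/*.log]
    # consumerretry settings embedded directly in the receiver config:
    # retry delivering to the pipeline instead of dropping on error
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 30s
      max_elapsed_time: 5m   # 0 would retry indefinitely
```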
For journald I wonder if there is a way to rely on the max system PIPE size, so that if we don't read from stdout fast enough, journalctl will wait until we start again.
Yeah... I don't think that we'd be able to keep using the …
Please look at the new exporterhelper batch sender here: https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/exporterhelper/batch_sender.go. It implements the ability to apply backpressure back to receivers.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Can we reopen this? It's still valid.
Definitely still valid, and it will become a HUGE detractor for OTel when more and more log-focused people catch wind. I have raised this multiple times as a major issue in the fundamental logic of handling enterprise logs. This doesn't even need to be as dynamic as a rate limit (though that sounds awesome). We have been pausing the input flow for like 20 years at Splunk, and all we do is pause reading and wait for the downstream to show it's accepting data again. Pretty sure collectors like beats and the mentioned fluent* do the same. It's way easier for a user to add disk and adjust log rotation logic (don't get me started on k8s' ridiculous file rotation defaults) to extend the time they can withstand congestion or an outage. Picking up logs when you have nowhere to send them balloons memory usage and is just not wise when the logs are already on durable disk. This is sorely needed or it's going to be some tough times for folks adopting OTel in high-stakes environments for logs. Spoiler alert... it already has been for some massive regulated customers.
Thanks for emphasizing this issue @matthewmodestino.
I think this is the key point but it's not clear to me how it should work. Back pressure in the data pipeline is a partial and imperfect solution, because as @wbh1 described there may be negative consequences downstream, or simply because downstream does not apply back pressure (e.g. asynchronous components).
Am I understanding correctly that this is similar to backpressure, except once failures begin the receiver pauses entirely and waits for a direct communication that it should resume? I can see how that would work well, but this would mean direct communication between components, which may require some new architecture. Do you have any thoughts on how we should accomplish this? I wonder if this needs to be a global framework for the collector, complementary to the backpressure mechanism described in #6908 (comment). Maybe it could work something like this:
There would be some details to work out here but I'm curious if others think this may be worth exploring further. cc: @open-telemetry/collector-approvers @open-telemetry/collector-contrib-approvers Some more detailed thoughts on this design:
I think we have found a way forward, working with my colleague @atoulme at Splunk and using a bunch of work from over the last 9 months originating here. We basically set both the filelog receiver and the exporter to retry on failure, set the timer to 0 to retry forever, and removed the batch processor (whose functionality has been moved into the exporter), allowing the filelog receiver to "pause" when it receives errors from downstream (the exporter). Won't hijack this issue further, as the original ask, while similar, is likely more about a "throttle", which would also be cool (similar to the Splunk MaxKBps limit setting)... but glad to see we are getting closer to a more enterprise-grade answer, and at least the OP may have a config option to try not to drop data if receiving throttling messages etc. Still a ton of testing to do, but glad we're seeing it get a bit closer.
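For the record, a minimal sketch of that kind of pipeline, assuming the filelog receiver's retry_on_failure (from consumerretry) and the exporterhelper retry settings; the otlphttp exporter and its endpoint are placeholders:

```yaml
receivers:
  filelog:
    include: [/var/log/app/*.log]
    retry_on_failure:
      enabled: true          # re-deliver to the pipeline instead of dropping
      max_elapsed_time: 0    # keep retrying, so the receiver effectively pauses reading

exporters:
  otlphttp:
    endpoint: https://example.com:4318   # placeholder endpoint
    retry_on_failure:
      enabled: true
      max_elapsed_time: 0    # retry forever instead of eventually dropping

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: []         # no batch processor; batching is left to the exporter
      exporters: [otlphttp]
```

Depending on the setup, the exporter's sending_queue may also need to be disabled or kept small so that export failures propagate back to the receiver rather than being absorbed by the queue.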
We have a similar feature request/use case I would like to add here. If desired, I can create a separate issue for it. We run an observability backend (Elasticsearch) shared by many teams and services (thousands). The services run in Kubernetes clusters and we want to collect the logs of all pods. The problem: if a service/pod becomes very noisy for some reason, it can burden the backend so much that all other teams feel it. In short: one team can ruin the day for all others. We would like to limit the effect a single instance or service can have on the observability backend. A solution could be implemented in different ways.
@juergen-kaiser-by, I think a new issue makes sense to evaluate the various options. If there's consensus that this should be solved in the receiver, we can take a closer look at this issue.
@juergen-kaiser-by - I think your use case may be covered by the open issue at #6908
It depends on the implementation details and how the collector is deployed to scrape the logs (https://opentelemetry.io/docs/kubernetes/collector/components/#filelog-receiver). We need rate limiting based on pod attributes. We could rate limit based on pod UID (which can be parsed in the filelog receiver from the file name), but I do not see how we can properly report the throttling if the collector is deployed in the recommended mode (DaemonSet), because we need other pod information (labels) to map the throttling back to a deployment and/or service. If we deploy the collector as a sidecar, we could inject that data into the collector config via env vars, but that is an "advanced configuration" and potentially wastes resources, as we run thousands of pods in our clusters.
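For illustration, a sketch of parsing pod metadata out of the log file path in the filelog receiver, assuming the usual kubelet layout /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/*.log; the regex and attribute names are illustrative, and any rate limiter keyed on the parsed uid would be hypothetical:

```yaml
receivers:
  filelog:
    include: [/var/log/pods/*/*/*.log]
    include_file_path: true
    operators:
      # parse namespace, pod name and pod uid out of the file path
      - type: regex_parser
        parse_from: attributes["log.file.path"]
        regex: '^/var/log/pods/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)/(?P<container_name>[^/]+)/.*\.log$'
```

Even with the uid extracted this way, mapping throttling back to a deployment or service still needs pod labels that the file path alone does not carry, which is the gap described above.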
Will create a new ticket for this.
/label never-stale
Still relevant.
@juergen-kaiser-by The right answer here might be to use opamp/supervisor as a back channel for health and performance data of the collector.
Component(s)
receiver/filelog, receiver/journald
Is your feature request related to a problem? Please describe.
This feature request is based on the FluentBit feature surrounding buffering described here: https://docs.fluentbit.io/manual/administration/backpressure
In our environment (and I'm sure many others), the destinations we send telemetry data to apply known limits on the rate at which we can transmit data. For example, we operate a Loki cluster that applies a per-stream rate limit of X MB/s. However, there is currently no way (that I know of, at least) in otelcol to apply any sort of throttling to the amount of data produced [without losing data, e.g. through sampling].
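For context, this is the kind of destination-side limit in question, assuming Loki's limits_config (values are illustrative):

```yaml
# Loki configuration (destination side), illustrative values
limits_config:
  per_stream_rate_limit: 3MB        # steady-state per-stream ingestion limit
  per_stream_rate_limit_burst: 15MB # short burst allowance per stream
```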
In our specific use case, we're using the `filelog` receiver. Under heavy load, some systems may exceed the aforementioned rate limit in the amount of data they're producing. However, there is no good way to communicate this back to the data producer. This is because we observe the rate limits on our gateways and there's currently no way to limit the data coming into the gateway, although #6908 probably solves this.

Regardless, that rate limiter will just shift the problem of getting a clogged-up queue from the gateway to the agent. Instead, we'd rather be able to limit the speed at which data is produced to prevent either the agent or the gateway from getting backed up with retries.
Describe the solution you'd like
For receivers where it makes sense (e.g. `filelog`, `journald`, basically anything that is reading from an external data source instead of having data pushed to it), there could be a way to specify a maximum ingestion rate. I think this should be configured per-receiver, but I'm sure there's an argument to be made that it should be a processor.

Measuring by size is preferable, but you could also limit by a count of data points (e.g. log lines). When this limit is reached, ingestion is paused on the receiver until the next interval. At the next interval, the receiver will resume ingestion at the same offset where it left off so there is no data loss.
The receiver would be expected to fall behind, but this is an acceptable tradeoff in order to ensure that -- eventually -- all data will be present. This is in contrast to the current system, in which backpressure scenarios can result in loss of data due to exhausted retries.
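A sketch of what such a per-receiver setting could look like; `max_ingest_rate` and all of its fields are hypothetical and do not exist in any released receiver:

```yaml
receivers:
  filelog:
    include: [/var/log/app/*.log]
    # Hypothetical setting: cap how fast this receiver reads from disk.
    max_ingest_rate:
      bytes_per_second: 5MiB     # preferred: limit by size
      # items_per_second: 10000  # alternative: limit by log-record count
      interval: 1s               # window after which paused ingestion resumes
```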
Describe alternatives you've considered
The alternatives I've considered are outlined above, but they all result in some type of backpressure scenario, just shifting where it occurs in the pipeline. To my knowledge, the only way to prevent backpressure is to either scale up your exporters' destination(s) to accept data faster (not always in the user's control) or to throttle the production of data.
Additional context
Some receivers like `journald` don't currently have a system to track their progress in receiving data the way that `filelog` does with offsets. Instead, the `journald` receiver constantly receives data by tailing the journal. In such a case, pausing ingestion would still cause memory to balloon. This would then trigger the memory_limiter, if configured, and data would still be dropped. Probably not desirable, but that's a separate issue for implementing that sort of logic in that receiver.