Spooling to disk GA #6859
Comments
Any update?
@opsnull an initial beta version of spooling to disk was included in the 6.3.0 release.
Regarding "Re-evaluate monitoring metrics", it would be useful to be able to observe the number of events in the queue as well as the age of the oldest item.
Looking at the updated list and the "Check spool file does not break if disk has not enough space to finish a write transaction" item, I assume that list is prioritized? Not sure how common btrfs is in production. Red Hat is dropping support for it (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.4_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality.html), but I believe btrfs is the default for SUSE.
For me, as a dev, having CLI tooling to inspect and recover the PQ is really important, and we should probably do it first. Being able to extract the actual data outside of Beats would be beneficial and would give better confidence.
Closing it for now as it will be done through the shippers work - elastic/elastic-agent-shipper#7
Add spooling to disk to Beats. Spooling all events to disk is useful for Beats if the output is blocked or not fast enough to deal with bursts of events. With spooling to disk available, Metricbeat modules will not be blocked, and Filebeat has a way of copying events from very fast rotating log files.
Requirements:
Tasks:
- TestResizeFile (in go-txfile)
- libbeat/scripts/cmd/stress_pipeline and libbeat/publisher/pipeline/stress