Support spooling to disk #5441
Comments
Some more thoughts on extending our use of badger: in addition to giving us just one on-disk storage mechanism, it would also enable support for encryption at rest and compression. Naturally badger supports storing arbitrary key/value data.

Another aspect to consider is when in the process we should be spooling to disk. If we use the libbeat disk queue, we're restricted to spooling when publishing the events (after processing and transformation). This means that if our model processors and transformation logic are not fast enough, we still risk dropping events. I don't think there is much risk of this at the moment, but that could change in the future.

The big question for me at the moment is whether badger is (or can reasonably be made) fit for purpose. I think we would only use key lookup for tail-based sampling. For spooling we would ideally maintain a read position in the value log (~queue), advancing the position as we send the events to libbeat. We might be able to approximate this with badger.Stream, or with an iterator that seeks past the last published key, as sketched below.
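A rough sketch of that read-position idea, assuming a dedicated badger key space for spooled events and a hypothetical bookkeeping key (`readPositionKey`) for the last published event; none of this reflects existing APM Server code:

```go
package main

import (
	"bytes"
	"fmt"

	badger "github.com/dgraph-io/badger/v3"
)

// readPositionKey is a hypothetical bookkeeping key holding the key of the
// last published event; it is not part of any existing APM Server schema.
var readPositionKey = []byte("!meta!read_position")

// drain publishes every spooled event stored after the persisted read
// position, then advances the position so the same events are not re-sent.
// A real implementation would also delete or expire published events.
func drain(db *badger.DB, publish func(event []byte) error) error {
	var publishErr error
	err := db.Update(func(txn *badger.Txn) error {
		// Load the key of the last event we successfully published, if any.
		var last []byte
		if item, err := txn.Get(readPositionKey); err == nil {
			last, _ = item.ValueCopy(nil)
		} else if err != badger.ErrKeyNotFound {
			return err
		}

		it := txn.NewIterator(badger.DefaultIteratorOptions)
		defer it.Close()

		if len(last) > 0 {
			it.Seek(last)
			if it.Valid() && bytes.Equal(it.Item().Key(), last) {
				it.Next() // skip the already-published event
			}
		} else {
			it.Rewind()
		}

		for ; it.Valid(); it.Next() {
			item := it.Item()
			if bytes.HasPrefix(item.Key(), []byte("!meta!")) {
				continue // skip bookkeeping keys
			}
			event, err := item.ValueCopy(nil)
			if err != nil {
				return err
			}
			if err := publish(event); err != nil {
				publishErr = err
				break // keep the position at the last successfully published event
			}
			last = item.KeyCopy(nil)
		}

		if len(last) > 0 {
			return txn.Set(readPositionKey, last)
		}
		return nil
	})
	if err != nil {
		return err
	}
	return publishErr
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/apm-spool"))
	if err != nil {
		panic(err)
	}
	defer db.Close()

	_ = drain(db, func(event []byte) error {
		fmt.Printf("publish %s\n", event)
		return nil
	})
}
```

badger.Stream could replace the iterator for higher-throughput reads, but it's oriented towards full or prefix scans rather than a persistent cursor, which is why the read position has to be tracked separately either way.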
I've been curious about using Prometheus' WAL, but if key lookup is a requirement for tail-based sampling then I believe it's a non-starter.
@stuartnelson3 we definitely need key lookup for tail-based sampling, but it's not a strict requirement that we only store data once on disk - that's just a nice-to-have. So, worth considering.
sqlite in WAL mode (or not in WAL mode) might also be of interest: https://www.sqlite.org/wal.html |
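For reference, WAL mode is a one-off pragma that persists with the database file. A minimal Go sketch, assuming the mattn/go-sqlite3 driver (purely illustrative, not something APM Server uses today):

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver (cgo)
)

func main() {
	db, err := sql.Open("sqlite3", "/tmp/apm-spool.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Switch the database to write-ahead logging; the setting is stored in
	// the database file, so it only needs to be applied once.
	var mode string
	if err := db.QueryRow("PRAGMA journal_mode=WAL;").Scan(&mode); err != nil {
		panic(err)
	}
	fmt.Println("journal mode:", mode) // prints "wal" on success
}
```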
Now that the libbeat disk queue has gone GA, I think it would be very unfortunate to add another queue to the system. @faec is working on some refactoring of the libbeat pipeline; please sync up with her directly to see how the pieces can fit together before we add one more option.
We are not planning to implement this any time soon. It's conceivable that we might make use of the Elastic Agent Shipper, which does have disk-based queuing built in. We'll open this back up if we plan to implement it. |
APM Server should support spooling events to disk, to avoid dropping data in the event that Elasticsearch is unavailable or temporarily overwhelmed.
Beats has a "disk queue", which is on its way to GA status: elastic/beats#22602. We have been waiting for this so that we can use it in APM Server.
We also have badger-based storage to support tail-based sampling. It would be a shame to have two different local storage mechanisms (not to mention storing data twice), so we may want to consider generalising this instead of using the libbeat disk queue.
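As a rough illustration of that generalisation (hypothetical key layout, not existing tail-based sampling code): each event could be written once under a sequence key that acts as the queue, with a small trace-ID index entry pointing back at it for sampling lookups.

```go
package spool

import (
	"encoding/binary"

	badger "github.com/dgraph-io/badger/v3"
)

// storeEvent writes the encoded event once under a monotonically increasing
// sequence key (the spool/queue), plus a small index entry so tail-based
// sampling can find it by trace ID without storing the payload twice.
// The key prefixes used here are illustrative only.
func storeEvent(db *badger.DB, seq uint64, traceID string, event []byte) error {
	return db.Update(func(txn *badger.Txn) error {
		// Queue keyspace: 'q' + big-endian sequence number, so iteration
		// order matches write order.
		queueKey := make([]byte, 9)
		queueKey[0] = 'q'
		binary.BigEndian.PutUint64(queueKey[1:], seq)
		if err := txn.Set(queueKey, event); err != nil {
			return err
		}

		// Index keyspace: 't' + trace ID + sequence number, with an empty
		// value; a prefix scan on 't'+traceID finds all events for a trace.
		indexKey := append(append([]byte("t"), traceID...), queueKey[1:]...)
		return txn.Set(indexKey, nil)
	})
}
```

A drain-style reader like the one sketched earlier would then iterate only the 'q' keyspace, while sampling decisions do prefix scans in the 't' keyspace, so a single store could serve both purposes.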