Journalbeat has no ability to not process old journal events #17758
Comments
Pinging @elastic/integrations-services (Team:Services)
I think it might be possible to seek the cursor based on time by using …
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
Hi! We're labeling this issue as …
It's relevant.
Hi! We're labeling this issue as …
Still need this.
(Journalbeat is no longer around and the functionality is part of Filebeat.) This is possible. The following configuration will initially backfill at most -168h of data (one week). After that it will resume from the saved cursor when restarted (because a cursor for …
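A minimal sketch of the configuration that comment appears to describe; the id value is a placeholder, and the option names (type: journald, seek, since) are taken from the Filebeat journald input rather than the truncated comment itself:

```yaml
filebeat.inputs:
  - type: journald
    # Placeholder id; the stored cursor is keyed on it, so keep it stable.
    id: host-journal
    # On the first run, start reading from one week in the past.
    seek: since
    since: -168h
```

Per the comment, the -168h window only bounds the initial backfill; on later restarts the saved cursor takes over.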
Describe the enhancement:
Journalbeat has no way to avoid processing journal events older than N ${time_units}.
For example, Filebeat has an ignore_older option, which relies on the logfile modification time (the usual approach for plain-text logfiles). Journalbeat has no similar option, not even one based on the same approach: it reads the latest event from any journal file it finds, even in "tail" mode.
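For reference, this is roughly what that Filebeat option looks like in a plain log input (paths are placeholders):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log   # placeholder path
    # Skip files whose modification time is older than one week.
    ignore_older: 168h
```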
Describe a specific use case for the enhancement or feature:
We have a k8s cluster and want to see logs in ELK not only from containers but also from daemons running on the host system.
We reconfigure Docker's log-driver option from json-file to journald, then deploy a Journalbeat daemonset instead of Filebeat, of course with /var/log/journal and the other relevant host directories mounted.
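Concretely, the Docker side of that change is just the daemon-level log driver setting, typically in /etc/docker/daemon.json:

```json
{
  "log-driver": "journald"
}
```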
Journalbeat pulls and processes all events from the journal, even if they are a year old, and the events then go to Logstash.
We don't want our ELK storage to grow uncontrolled, so we split indices on a per-day basis and keep only the last week's indices open; older indices are closed.
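The daily split happens in the Logstash Elasticsearch output, roughly along these lines (host and index name are placeholders):

```
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # placeholder host
    # One index per day; indices older than a week get closed separately.
    index => "journal-%{+YYYY.MM.dd}"
  }
}
```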
Now Logstash tries to write events to the closed indices, which leads to a file-descriptor leak in Logstash (a long-standing, well-known Logstash bug^W feature ;)).
I want a way to tell Journalbeat not to process journal events older than a week.
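A sketch of what that could look like; the ignore_older option below is hypothetical and does not exist in Journalbeat, while paths and seek are existing settings:

```yaml
journalbeat.inputs:
  - paths: ["/var/log/journal"]
    seek: tail
    # Hypothetical option requested in this issue; not implemented in Journalbeat.
    ignore_older: 168h
```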
Describe a possible workaround:
We configure journald to rotate its journal files every day and keep them only for a week. That's it.
So there is nothing to change in the Journalbeat configuration itself, unfortunately.
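For reference, the journald retention settings used for this workaround look roughly like the following (values are examples):

```ini
# /etc/systemd/journald.conf
[Journal]
# Rotate journal files at least once a day.
MaxFileSec=1day
# Drop journal entries older than one week.
MaxRetentionSec=1week
```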