logstash stop reading new lines #205
Comments
Hi, I found that the problem may be inode number reuse, but I'm not sure about that; I'll look at it later...
It's not a problem of inode number reuse: Logstash still stopped reading new lines roughly 46 hours after I set "sincedb_clean_after => 1.5".
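For reference, a minimal sketch of a file input using that setting, with hypothetical paths (not the reporter's actual config); `sincedb_clean_after => 1.5` expires sincedb entries after about 36 hours:

```
input {
  file {
    path => "/var/log/app/*.json"              # hypothetical path, for illustration only
    sincedb_path => "/var/lib/logstash/sincedb_app"
    sincedb_clean_after => 1.5                 # value is in days; 1.5 days ~= 36 hours
    codec => "json"
  }
}
```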
Hey guys, I'm having the same problem with Logstash 6.4.0 on Amazon Linux 2.
I changed the log level to trace, and all I get are these two lines that repeat themselves:
I know what this bug is. The fix to exit the stuck loop is not hard, but first: are you using logrotate? The copy/truncate strategy can cause this when the file is unexpectedly truncated while the file input is still looping through the remaining unread content, and it gets stuck in that loop.
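For context, copy/truncate rotation looks roughly like the following logrotate stanza (a hypothetical example, not taken from this report). `copytruncate` copies the log and then truncates the original in place, which is exactly the moment a tailing reader can be caught mid-read:

```
# hypothetical /etc/logrotate.d/app entry, for illustration only
/var/log/app/*.json {
    daily
    rotate 7
    compress
    copytruncate   # copy the file, then truncate the original in place
}
```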
Fix endless outer loop when the inner loop gets an error. (#221) * Fix endless outer loop when the inner loop gets an error. The processing is lagging behind, i.e. bytes read is less than the file size. 1. The grow handler begins a read of the remaining bytes from an offset, as a loop B inside loop A. 2. While the code is in loop B, the file is truncated and sysread raises an EOF error that breaks out of loop B, but the watched_file thinks it still has more to read (the truncation has not been detected at this time), so loop A retries loop B again without success. Fixes #205 * change to flag_read_error
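A rough Ruby sketch of the failure mode described above, with made-up class and method names (this is not the plugin's actual code): loop A keeps retrying as long as the watched file appears to have unread bytes, loop B reads chunks and hits EOFError once the file has been truncated underneath it, and the fix is to flag the read error so loop A stops retrying.

```ruby
# Illustrative only: a simulated "grow" handler, not the real filewatch code.
require "stringio"

class WatchedFileSim
  attr_accessor :bytes_read, :size, :read_error

  def initialize(size)
    @bytes_read = 0
    @size = size          # size observed when the file was last stat'ed
    @read_error = false
  end

  def unread_bytes?
    !@read_error && @bytes_read < @size
  end
end

def read_to_eof(watched, io, chunk_size = 4)
  loop do                              # loop B: read the remaining content in chunks
    chunk = io.sysread(chunk_size)     # raises EOFError once the data is gone
    watched.bytes_read += chunk.bytesize
  end
rescue EOFError
  # Before the fix: loop B simply returned here, bytes_read < size still held,
  # so the caller (loop A) retried forever. The fix flags the error instead.
  watched.read_error = true
end

watched = WatchedFileSim.new(10)       # file was believed to hold 10 bytes...
io = StringIO.new("short")             # ...but was truncated to 5 bytes meanwhile

read_to_eof(watched, io) while watched.unread_bytes?   # loop A now terminates via the flag
puts "read #{watched.bytes_read} of #{watched.size} bytes, read_error=#{watched.read_error}"
```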
@ThomasLau
@guyboertje we're observing exactly the same behaviour, occurring daily.
Since switching to the logstash-input-file plugin 4.1.8 two days ago, we haven't seen this happen again. Edit: I hadn't noticed that this issue had been closed - apologies!
It is not only OK to confirm the fix in a closed issue, it is good, because future readers get to see the confirmation. Thanks.
Hi guys, this is the system info.
Each line is a JSON string; the size of each line may be 2 kB-2 MB.
It happens occasionally, about every 3-6 days.
I changed the log level to DEBUG and only found this:
```
.....
251605:[2018-09-04T12:59:16,989][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251606:[2018-09-04T12:59:16,990][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251607:[2018-09-04T12:59:16,991][DEBUG][filewatch.sincedbcollection] writing sincedb (delta since last write = 1)
251608:[2018-09-04T12:59:17,006][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251609:[2018-09-04T12:59:17,007][DEBUG][filewatch.sincedbcollection] writing sincedb (delta since last write = 1)
251610:[2018-09-04T12:59:18,021][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251611:[2018-09-04T12:59:18,023][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251612:[2018-09-04T12:59:18,023][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251613:[2018-09-04T12:59:18,023][DEBUG][filewatch.sincedbcollection] writing sincedb (delta since last write = 1)
251614:[2018-09-04T12:59:18,038][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk
251615:[2018-09-04T12:59:18,940][DEBUG][logstash.agent ] Reading config file {:config_file=>"/data/logstash/logstash-5.6.0_fix/config/dc-zx.conf"}
251616:[2018-09-04T12:59:18,940][DEBUG][logstash.agent ] no configuration change for pipeline {:pipeline=>"main"}
251617:[2018-09-04T12:59:21,254][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
251618:[2018-09-04T12:59:21,262][DEBUG][logstash.pipeline ] Flushing {:plugin=>#<LogStash::FilterDelegator:0x75f79e76 @metric_events_out
....
```
The sincedb was last updated at 2018-09-04T12:59:18, but new tailed lines of this file were ignored after that.
From the log above, it seems that the filewatch.tailmode.handlers code found the new lines but didn't write them to the output.
However, everything works fine after I restart Logstash, until it produces the same log as above again.
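One way to confirm this kind of stall is to compare the sincedb offsets against the current file sizes. Below is a minimal Ruby sketch, assuming the common sincedb record layout of inode, device major/minor, byte offset, and (in newer plugin versions) an expiry timestamp and path; older plugin versions omit the path column, so treat this as illustrative only:

```ruby
# Illustrative helper, not part of the plugin: report how far each tracked
# file's sincedb offset lags behind its current size.
sincedb_path = ARGV.fetch(0, "/data/logstash/.sincedb")   # hypothetical default path

File.foreach(sincedb_path) do |line|
  fields = line.split
  inode  = fields[0]
  offset = fields[3].to_i
  path   = fields[5]                   # only present in newer sincedb records; may be nil
  next unless path && File.exist?(path)

  size = File.size(path)
  puts format("%-50s inode=%s offset=%d size=%d lag=%d", path, inode, offset, size, size - offset)
end
```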