Filebeat OOMs on very long lines #19500
Pinging @elastic/integrations-services (Team:Services)
Opened a PR that should fix this problem. In the test case, I believe, there is a typo where the author mentioned …
Yes, it is a typo. Thanks for your change! It looks like a good candidate, since it would prevent filebeat from being OOM killed; however, I think it could be improved by just skipping the big line instead of returning from the reader.
Sure, can update the PR to do that. |
Repro
Note: The config specifies a 1MB limit on a single line
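The config itself did not survive extraction; a minimal sketch of what it plausibly looked like, assuming a standard log input (the input type, paths, and output are assumptions, only the 1MB `max_bytes` cap comes from the note above):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /test/readme.log
    # Cap a single log line at 1MB; longer lines should be truncated.
    max_bytes: 1048576
output.console:
  enabled: true
```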
Run this bash command:
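The command is missing from the extracted text; a hedged reconstruction based on the note below (the file size, mount path, and memory limit come from the note; the image tag, config mount, and file contents are assumptions):

```bash
# Create a 100MB file consisting of a single line with no trailing newline.
head -c 100M /dev/zero | tr '\0' 'a' > readme.log

# Run filebeat with a 50MB memory limit, mounting the file as /test/readme.log.
docker run --rm --memory=50m \
  -v "$PWD/readme.log:/test/readme.log:ro" \
  -v "$PWD/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  docker.elastic.co/beats/filebeat:7.8.0
```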
Note: This command creates a 100MB file called "readme.log" and mounts it to filebeat as the `/test/readme.log` file. It sets a 50MB memory limit on the docker container.

Expected: the command runs fine; filebeat reads 1MB of data from readme.log before realizing the line is too long, and then seeks to the next line without buffering the rest of the current one in memory.

Actual: filebeat reads all 100MB of the file into memory searching for a newline and OOMs.
This is pretty bad for us: if we make a mistake and log a huge line, filebeat gets OOM killed. Filebeat should stop buffering a line once it exceeds max_bytes.
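A minimal sketch, in Go (Filebeat's implementation language), of the skip-instead-of-buffer approach discussed in the comments above. This is illustrative only, not Filebeat's actual reader code: once the cap is hit, the remainder of the line is read and discarded rather than accumulated, so memory use stays bounded regardless of line length.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"strings"
)

// readLine returns the next line, truncated to maxBytes. Bytes beyond the
// limit are consumed and thrown away rather than buffered, so a pathologically
// long line cannot grow the buffer past the cap.
func readLine(r *bufio.Reader, maxBytes int) ([]byte, error) {
	var buf bytes.Buffer
	skipping := false
	for {
		// ReadSlice returns a view into bufio's fixed internal buffer,
		// so each iteration reads at most one buffer's worth of data.
		chunk, err := r.ReadSlice('\n')
		if !skipping {
			if buf.Len()+len(chunk) > maxBytes {
				buf.Write(chunk[:maxBytes-buf.Len()])
				skipping = true // limit hit: discard the rest of the line
			} else {
				buf.Write(chunk)
			}
		}
		if err == bufio.ErrBufferFull {
			continue // line continues; keep draining without growing buf
		}
		return bytes.TrimRight(buf.Bytes(), "\n"), err
	}
}

func main() {
	// A 1MB single line followed by a short one.
	input := strings.Repeat("a", 1<<20) + "\nshort line\n"
	r := bufio.NewReader(strings.NewReader(input))
	for {
		line, err := readLine(r, 1024)
		if len(line) > 0 {
			fmt.Println(len(line)) // prints 1024 (truncated), then 10
		}
		if err != nil {
			break
		}
	}
}
```

The key point is that `ReadSlice` reuses bufio's fixed internal buffer, so once `skipping` is set, the loop consumes an arbitrarily long line in constant memory instead of accumulating it while searching for the newline.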