Getting BufferChunkOverflowError from non-buffered copy plugin! #2928
Comments
This is kafka2's error, which the copy plugin emits. The error message is a bit confusing. How about adding the class name of the plugin where the error occurs to the error log, e.g. at fluentd/lib/fluent/plugin/out_copy.rb Line 64 in cc97aa4?
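A rough sketch of that suggestion (hypothetical, not an actual fluentd patch): wrap each downstream emit in out_copy.rb so the log records which output actually raised:

```ruby
# Hypothetical sketch of the suggestion -- not an actual fluentd patch.
# In out_copy.rb, each <store> output is emitted to in turn; rescuing
# here lets the log name the plugin where the error really occurred.
outputs.each do |output|
  begin
    output.emit_events(tag, es)
  rescue => e
    log.error "error while emitting to #{output.class}", error: e
    raise  # re-raise so fluentd's normal error handling still applies
  end
end
```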
It's weird... The error can only happen when a chunk is over chunk_limit_size. See fluentd/lib/fluent/plugin/buffer.rb Line 565 in cc97aa4 and fluentd/lib/fluent/plugin/buffer.rb Line 711 in cc97aa4.
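Paraphrasing what the referenced buffer.rb check does (a sketch, not the exact source): a single record that is already bigger than chunk_limit_size can never fit into any chunk, so the buffer rejects it outright:

```ruby
# Sketch of the check, paraphrased from the referenced buffer.rb lines
# (variable names approximate). A record larger than chunk_limit_size
# cannot fit into any chunk, so buffering it is impossible.
if record_size > @chunk_limit_size
  raise BufferChunkOverflowError,
        "a #{record_size}bytes record is larger than buffer chunk limit size"
end
```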
@ganmacs Interesting ... So you're saying that, even though the error message comes from the copy plugin -- which we know because the error message has the copy plugin's ID: [publish_logs_to_outputs] -- the actual error was generated by the kafka2 plugin?
Yes. Handling such errors outside of the plugin that raises them is part of fluentd's design.
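A simplified sketch of why the attribution works out that way (paraphrasing the behavior, not quoting the source): the copy plugin emits to each downstream output synchronously, so an exception raised inside kafka2's buffering unwinds through the copy plugin's emit path, where fluentd's generic error handling logs it:

```ruby
# Simplified sketch of the emit path -- paraphrased, not the actual source.
def process(tag, es)
  outputs.each do |output|        # one entry per <store>, here two kafka2 outputs
    output.emit_events(tag, es)   # kafka2's buffer raises BufferChunkOverflowError here
  end
end
# The exception unwinds out of the copy plugin's process, so fluentd's
# generic emit error handling logs it under the copy plugin's ID
# ([publish_logs_to_outputs]), even though kafka2 raised it.
```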
@ganmacs Thank you for explaining how the error handling works -- that answers my first question. Do you have any ideas regarding my second question: Why is BufferChunkOverflowError being triggered by logs that are < chunk_limit_size (600 KB)?
I'm not sure. I can't reproduce it. Do you have a reproduction config and input data?
Thank you. I just checked my records -- it is still happening, but not frequently: about 116 times over 24 hours. Our current config is fairly complex, though -- let me see if I can get a simpler config that reproduces the same issue, and I will upload it here; if not, I will just upload the complex config.
This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove the stale label or add a comment, or this issue will be closed in 30 days.
This issue was automatically closed because it remained stale for 30 days.
Describe the bug
In my Fluentd config, I use the copy plugin to send the same logs to two Kafka clusters (each using the kafka2 plugin). In each kafka2 section, I set the max buffer chunk size (i.e. chunk_limit_size) to 600 KB (600,000 bytes).
(See Fluentd config below)
I am seeing multiple BufferChunkOverflowError messages logged under the copy plugin's ID ([publish_logs_to_outputs]).
There are two problems with these error messages:
- They are attributed to the copy plugin, which is non-buffered and has no chunk_limit_size of its own.
- The logs they complain about are smaller than chunk_limit_size (600 KB).
Expected behavior
What I expect is:
- Logs larger than chunk_limit_size (600 KB) are dropped by the buffered kafka2 plugin, with a BufferChunkOverflowError reported from kafka2.
- Logs smaller than 600 KB are delivered to both Kafka clusters without any BufferChunkOverflowError, and in particular without one attributed to the non-buffered copy plugin.
Your Environment
Your Configuration
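A minimal sketch of the configuration described above -- two kafka2 <store> sections under copy, each with chunk_limit_size 600000. The tag, broker addresses, and topic are placeholders, not the real config:

```
<match app.**>
  @type copy
  @id publish_logs_to_outputs
  <store>
    @type kafka2
    brokers kafka-cluster-a:9092      # placeholder
    default_topic logs                # placeholder
    <format>
      @type json
    </format>
    <buffer topic>
      chunk_limit_size 600000         # 600 KB
    </buffer>
  </store>
  <store>
    @type kafka2
    brokers kafka-cluster-b:9092      # placeholder
    default_topic logs                # placeholder
    <format>
      @type json
    </format>
    <buffer topic>
      chunk_limit_size 600000         # 600 KB
    </buffer>
  </store>
</match>
```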
Your Error Log
(copied from above)
Additional context
To clarify, most of the error messages are what I expect, and indicate that the kafka2 config (and chunk_limit_size config) are working as expected.
For example, most of the error messages come from the buffered kafka2 plugin and are triggered by a log record larger than chunk_limit_size (600 KB). There is no problem with those messages -- they are expected, because I expect logs > 600 KB to be rejected by the kafka2 plugin.
My problem is that I ALSO sometimes see the weird error messages described at the top, which are attributed to the non-buffered copy plugin and complain about logs that are smaller than chunk_limit_size (600 KB).