IOError error="closed stream" with fluentd-1.2.5/lib/fluent/plugin/buffer/file_chunk.rb:83 #2391
Comments
From the configuration, multiple workers should not read the same file, because the buffer file paths are different.
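To illustrate the point about distinct buffer file paths, here is a minimal Ruby sketch of deriving a per-worker buffer directory from a shared root directory. The helper name and arguments are hypothetical rather than fluentd's actual API; the resulting layout simply mirrors the worker22/google_cloud/buffer path that shows up in the ENOMEM log further down this thread.

```ruby
# Hypothetical helper (not fluentd's implementation): give every worker its own
# buffer directory so that chunk and .meta files are never shared across workers.
def per_worker_buffer_dir(root_dir, worker_id, plugin_id)
  File.join(root_dir, "worker#{worker_id}", plugin_id, "buffer")
end

# => "/stackdriver-log-aggregator-persistent-volume/worker22/google_cloud/buffer"
puts per_worker_buffer_dir("/stackdriver-log-aggregator-persistent-volume", 22, "google_cloud")
```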
Good to know! I'll upgrade to v1.3 and see if we still see this issue.
Closing this issue for now. If I see it again in versions >= v1.3, I'll reopen then.
@repeatedly - It turned out that we are still seeing this in Fluentd v1.4.2:
I see. So this problem is reproducible in your environment.
Let me try that.
Seeing the following errors after adding "log.warn e.inspect":
@repeatedly - Does this look familiar?
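For context, here is a self-contained illustration of the kind of detail an added "log.warn e.inspect" surfaces when a write hits an already-closed file handle. It is a standalone sketch, not the actual fluentd code path in file_chunk.rb.

```ruby
require 'tempfile'
require 'logger'

log = Logger.new($stderr)

file = Tempfile.new('chunk')   # stand-in for a buffer chunk file
file.close                     # simulate the handle being closed underneath a writer

begin
  file.write('payload')        # raises IOError ("closed stream")
rescue => e
  log.warn(e.inspect)          # prints e.g. #<IOError: closed stream>
end
```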
We are seeing such errors from time to time. There were a lot of them, especially when the agent first started up. After a while they became less frequent.
I think this issue occurs because the system runs out of memory (Cannot allocate memory); see the log below:
2019-04-26 06:16:20 +0000 [warn]: #22 emit transaction failed: error_class=Errno::ENOMEM error="Cannot allocate memory @ io_write - /stackdriver-log-aggregator-persistent-volume/worker22/google_cloud/buffer/buffer.b58768df7aa7590b36475907e84dcd48f.log.meta" location="/opt/google-fluentd/embedded/lib/ruby/gems/2.4.0/gems/fluentd-1.4.2/lib/fluent/plugin/buffer/file_chunk.rb:251:in `write'" tag="stdout"
That was my first hunch too. We had already allocated 4 GB of memory to it, though, so I was wondering whether there was a deadlock somewhere. I'll try raising that threshold and see if we still get errors.
It seems we need more than 4 GB of allocated memory for 20 workers (that was a bit surprising). After I reduced the worker count to 10, it seems to be fine with 4 GB of memory, and I'm no longer seeing this issue. Closing this ticket.
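As a back-of-the-envelope check on that sizing (assuming, only for illustration, that memory use scales roughly linearly with the number of workers):

```ruby
# Rough per-worker budget implied by the numbers above (assumed linear scaling).
total_gb = 4.0
puts total_gb / 20   # => 0.2  (~200 MB per worker: not enough in this report)
puts total_gb / 10   # => 0.4  (~400 MB per worker: reported to work)
```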
I just ran into the same problem for the same reason, and it was hard to debug. Can we make the error message more visible?
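One possible shape for a louder message, sketched purely as an illustration rather than as fluentd's actual code; the method name, `meta_io`, `meta_path`, and `log` here are hypothetical stand-ins:

```ruby
# Illustrative sketch only: report the out-of-memory failure clearly where the
# metadata write fails, instead of letting it resurface later as a confusing
# IOError ("closed stream") on the chunk.
def write_metadata_loudly(meta_io, meta_path, bytes, log)
  meta_io.write(bytes)
rescue Errno::ENOMEM => e
  log.error("cannot write buffer metadata #{meta_path}: system is out of memory; " \
            "consider adding memory or reducing the number of workers")
  raise e
end
```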
+1 on jkohen's request
FYI, I'm seeing intermittent
Fluentd version: v1.2.5
Environment: Fluentd within a container running inside Kubernetes
Docker base image: debian:9.6-slim
Problem
We're seeing the following errors intermittently.
It happened every now and then, but did not seem to crash the agent or affect most of the logs being ingested. It did cause some logs to be dropped.
Any idea what might cause this? Note that we are using multi-worker mode. Is there some race condition among the workers?
Config