Fix memory leak in Filebeat pipeline acker #12063

Merged: 5 commits, May 7, 2019
1 change: 1 addition & 0 deletions CHANGELOG.next.asciidoc
@@ -87,6 +87,7 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD diff]
- Fix `add_docker_metadata` source matching, using `log.file.path` field now. {pull}11577[11577]
- Add missing Kubernetes metadata fields to Filebeat CoreDNS module, and fix a documentation error. {pull}11591[11591]
- Reduce memory usage if long lines are truncated to fit `max_bytes` limit. The line buffer is copied into a smaller buffer now. This allows the runtime to release unused memory earlier. {pull}11524[11524]
- Fix memory leak in Filebeat pipeline acker. {pull}12063[12063]

*Heartbeat*

9 changes: 8 additions & 1 deletion libbeat/publisher/pipeline/acker.go
@@ -139,19 +139,26 @@ func (a *gapCountACK) ackLoop() {
 		case <-a.done:
 			closing = true
 			a.done = nil
+			if a.events.Load() == 0 {
+				// stop worker, if all events accounted for have been ACKed.
+				// If new events are added after this point, the acker won't
+				// handle them, which may result in duplicates.
+				return
+			}
Review comment:
This one is tricky and potentially racy. The addEvent function is called asynchronously, meaning that events could still be added even though the acker is already closed. It might still be ok if the client rejects events on close and guarantees that no more events will be added before the acker is closed.

Checking the client code, I found that addEvent can indeed be called after the done channel is closed and the loop has returned.

Might be ok, though. The ACKer also ignores pending ACKs if shutdown is forced.
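
To make the race concrete, here is a minimal, hypothetical Go sketch (simplified stand-ins, not the real Beats types): if a producer calls addEvent concurrently with shutdown, the pending-event counter can be incremented after the loop has already returned, and that event is never ACKed.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// events counts pending, not-yet-ACKed events, like a.events in the patch.
var events int64

// addEvent is called asynchronously by producers, possibly during shutdown.
func addEvent() { atomic.AddInt64(&events, 1) }

func main() {
	done := make(chan struct{})
	stopped := make(chan struct{})

	// Simplified done-case of the ack loop: exit once nothing is pending.
	go func() {
		<-done
		if atomic.LoadInt64(&events) == 0 {
			close(stopped)
		}
	}()

	close(done)
	<-stopped  // the loop has returned...
	addEvent() // ...so this late event is counted but never ACKed
	fmt.Println("pending but unhandled:", atomic.LoadInt64(&events))
}
```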

Contributor Author:
I've added a comment to explain the situation here.


 		case <-a.pipeline.ackDone:
 			return

 		case n := <-acks:
 			empty := a.handleACK(n)
 			if empty && closing && a.events.Load() == 0 {
-				// stop worker, iff all events accounted for have been ACKed
+				// stop worker, if and only if all events accounted for have been ACKed
 				return
 			}

 		case <-drop:
 			// TODO: accumulate multiple drop events + flush count with timer
 			a.events.Sub(1)
 			a.fn(1, 0)
 		}
 	}
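
For context on why the early return fixes the leak: before this change, closing done while events was already zero left the goroutine parked on the acks and drop channels forever, since no further ACKs arrive for zero pending events. A hypothetical, self-contained sketch of a regression check for this class of leak (simplified names, not the real Beats types):

```go
package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
	"time"
)

// ackLoop is a simplified stand-in for gapCountACK.ackLoop.
func ackLoop(events *int64, done <-chan struct{}, acks <-chan int) {
	closing := false
	for {
		select {
		case <-done:
			closing = true
			done = nil
			if atomic.LoadInt64(events) == 0 {
				// The fix: nothing is pending, so exit instead of
				// blocking on acks forever and leaking the goroutine.
				return
			}
		case n := <-acks:
			if atomic.AddInt64(events, int64(-n)) == 0 && closing {
				return
			}
		}
	}
}

func main() {
	var events int64
	done := make(chan struct{})
	acks := make(chan int)

	before := runtime.NumGoroutine()
	go ackLoop(&events, done, acks)

	close(done) // shut down with zero pending events
	time.Sleep(100 * time.Millisecond)

	// Without the early return, the loop would still be parked on acks here.
	fmt.Printf("goroutines: before=%d after=%d\n", before, runtime.NumGoroutine())
}
```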