This repository has been archived by the owner on Nov 7, 2018. It is now read-only.

ES Client connection draining #61

Open
so0k opened this issue Nov 3, 2016 · 4 comments

Comments

so0k commented Nov 3, 2016

As we're using CoreOS auto updates, we get client timeouts when an ES client node is lost.

Does anyone have experience with configuring the ES Clients to do a graceful shutdown / connection draining on the k8s service definitions?

andrewhowdencom commented Nov 3, 2016

Do they exit gracefully if you drain the node with $ kubectl drain {node}? I remember creating a systemd service unit that drained nodes prior to CoreOS updates (for another issue).

Edit: Even if they don't, signalling a drain might trigger the container exit hooks, and you can use those to clean up.

pires (Owner) commented Nov 3, 2016

Not sure how CoreOS signals this to applications, but it should be something like CoreOS --(signal)--> Kubernetes --(signal)--> Pod --(signal)--> Elasticsearch container.
Read about Pod termination in the Kubernetes documentation.
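For context, the Pod side of that signal chain is configurable per pod: a preStop hook runs before SIGTERM is sent, and terminationGracePeriodSeconds bounds the window before SIGKILL. A minimal sketch (the pod/container names, image, and sleep duration are illustrative assumptions, not the repo's actual manifests):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: es-client            # hypothetical name
spec:
  # How long Kubernetes waits between SIGTERM and SIGKILL
  terminationGracePeriodSeconds: 60
  containers:
  - name: elasticsearch
    image: elasticsearch:2.4  # assumption; use your actual image
    lifecycle:
      preStop:
        exec:
          # Runs before SIGTERM is delivered; gives in-flight ES
          # client connections a window to drain. The sleep here
          # is a placeholder for a real drain command.
          command: ["/bin/sh", "-c", "sleep 15"]
```

The important part for connection draining is that the preStop hook completes (or the grace period expires) before the container receives SIGTERM.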

so0k (Author) commented Nov 3, 2016

Thanks both for the fast response. We'll be looking at coreos/bugs#1274 to have CoreOS signal Kubernetes. @andrewhowdencom, could you share how you did the draining?

Was it an ExecStop hook on the kubelet service that signals the API to drain the node first?

@andrewhowdencom

@so0k Exactly that
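For anyone landing here, a sketch of what such an ExecStop drop-in could look like (the file path, kubectl/kubeconfig locations, and the use of %H as the node name are all assumptions about the node setup, not the exact unit from the comment above):

```ini
# /etc/systemd/system/kubelet.service.d/10-drain.conf  (hypothetical path)
[Service]
# Runs when the kubelet unit is stopped (e.g. before a CoreOS
# update reboot); %H expands to the machine's host name, which
# is assumed here to match the Kubernetes node name.
ExecStop=/opt/bin/kubectl --kubeconfig=/etc/kubernetes/kubeconfig \
    drain %H --force --ignore-daemonsets
# Give the eviction enough time to complete before systemd
# escalates to SIGKILL.
TimeoutStopSec=120
```

Note that ExecStop runs while the main kubelet process is still alive, so the kubelet can carry out the pod evictions that the drain triggers.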


3 participants