This repository has been archived by the owner on Nov 7, 2018. It is now read-only.
Do they exit gracefully if you drain the node with $ kubectl drain {node}? I remember creating a systemd service unit that drained nodes prior to CoreOS updates (for another issue).
Edit: Even if they don't, signalling a drain might trigger the container exit hooks, and you can use those to clean up.
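For reference, a container-level preStop hook is the Kubernetes mechanism for that kind of cleanup. A minimal sketch of a pod spec fragment (the image and the cleanup script path are placeholders, not anything from this thread):

```yaml
# Sketch: run a cleanup command before Kubernetes sends SIGTERM
# to the container during pod termination / node drain.
containers:
  - name: es-client
    image: elasticsearch          # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/usr/local/bin/cleanup.sh"]
```

The hook runs to completion (within the grace period) before the kubelet delivers SIGTERM to the container's main process.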
Not sure how CoreOS signals this to applications, but it should be something like CoreOS --(signal)--> Kubernetes --(signal)--> Pod --(signal)--> Elasticsearch container.
Read about Pod Termination here and here.
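To make the last hop of that chain concrete, here is a minimal sketch of a container entrypoint that traps SIGTERM and forwards it to its child process so the wrapped server can exit cleanly; the `sleep` stands in for Elasticsearch, and the self-kill at the end exists only to make the demo self-contained:

```shell
#!/bin/sh
# Sketch: forward SIGTERM from Kubernetes to the child process and
# wait for it to exit before the entrypoint itself terminates.

terminated=0
shutdown() {
    terminated=1
    kill -TERM "$child" 2>/dev/null
    wait "$child" 2>/dev/null
}
trap shutdown TERM

# Stand-in for the real server process (e.g. Elasticsearch).
sleep 30 &
child=$!

# Demo only: simulate Kubernetes sending SIGTERM after one second.
( sleep 1; kill -TERM $$ ) &

wait "$child"
echo "terminated=$terminated"   # prints terminated=1
```

Because the signal arrives while the shell is blocked in `wait`, the trap runs immediately, which is why this pattern (rather than `exec`-ing the server directly) is useful when cleanup has to happen between SIGTERM and exit.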
Thanks both for the fast response. We'll be looking at coreos/bugs#1274 to have CoreOS signal Kubernetes. @andrewhowdencom, could you share how you did the draining?
Was it an ExecStop hook on the kubelet service that signals the API to drain the node first?
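One possible shape for such a hook, purely as a sketch (this is not the original author's unit; the file path is illustrative, and `%H` is systemd's hostname specifier standing in for the node name):

```ini
# /etc/systemd/system/kubelet.service.d/10-drain.conf
# Hypothetical drop-in: drain the node before the kubelet stops.
[Service]
ExecStop=/usr/bin/kubectl drain %H --ignore-daemonsets --force
```

Note that ExecStop runs when the unit is stopped, so this only helps if the update/reboot flow actually stops the kubelet service rather than hard-rebooting the machine.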
As we're using CoreOS auto-updates, we get client timeouts when an ES client node is lost.
Does anyone have experience configuring the ES client nodes to do a graceful shutdown / connection draining via the k8s service definitions?
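One common mitigation (a sketch of a general Kubernetes pattern, not a confirmed fix for this issue) is to give the pod a termination grace period and a short preStop sleep, so the endpoint controller can remove the pod from the Service before in-flight connections are cut:

```yaml
# Sketch: let connections drain before the ES client pod exits.
spec:
  terminationGracePeriodSeconds: 60   # SIGKILL only after this window
  containers:
    - name: es-client
      image: elasticsearch            # placeholder image
      lifecycle:
        preStop:
          exec:
            # Sleep briefly so kube-proxy / Service endpoints stop
            # routing new connections to this pod before it exits.
            command: ["/bin/sh", "-c", "sleep 15"]
```

The numbers are illustrative; they should be tuned to how long the ES clients take to finish outstanding requests.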