This repository has been archived by the owner on Sep 30, 2020. It is now read-only.
etcdctl 3.2.7 endpoint health output defaults to stderr instead of stdout, breaking etcdadm checks #1764
Labels
lifecycle/rotten
Denotes an issue or PR that has aged beyond stale and will be auto-closed.
Hello,
Creating a new cluster with kube-aws 0.14.2 and trying etcd v3.2.7.
Problem: no Etcd snapshots are written to S3.
Checking the logs on the Etcd instances, I saw that the cluster was not marked as healthy, despite working correctly. As I had a cluster with kube-aws 0.14.1 and Etcd v3.2.6 running well, I compared the sha1sum of etcdadm on both machines and confirmed that the file is the same.
Running some more tests, I discovered that etcdctl in v3.2.7 writes the output of the endpoint health command to stderr instead of stdout, as it did previously:
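The difference is easy to demonstrate with plain shell redirection. This is a minimal sketch using stand-in functions in place of the real etcdctl binary (the health message text and function names here are hypothetical; only the stream each version writes to matches the reported behavior):

```shell
#!/usr/bin/env bash
# Stand-in for "etcdctl endpoint health" in 3.2.7: health message on stderr.
health_new() { printf 'endpoint is healthy\n' >&2; }
# Stand-in for the 3.2.6 behavior: same message on stdout.
health_old() { printf 'endpoint is healthy\n'; }

# Capture stdout only, discarding stderr — the way a stdout-based
# check would see each version.
old_out=$(health_old 2>/dev/null)
new_out=$(health_new 2>/dev/null)

echo "old stdout: '$old_out'"   # old stdout: 'endpoint is healthy'
echo "new stdout: '$new_out'"   # new stdout: ''
```

Any consumer that captures only stdout sees an empty string from the 3.2.7-style behavior, even though the endpoint is healthy.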
etcdctl 3.2.6 (Old behavior)
Redirecting stdout to /dev/null
Redirecting stderr to /dev/null
Output of journalctl -u etcdadm-save.service
etcdctl 3.2.7 (New behavior)
Redirecting stdout to /dev/null
Redirecting stderr to /dev/null
Output of journalctl -u etcdadm-save.service
This behavior seems to affect the function member_is_ready of etcdadm:
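The actual etcdadm function body isn't reproduced above, but a check of this general shape illustrates the failure mode: grepping only stdout finds nothing once the health message moves to stderr. This is a hypothetical sketch, not etcdadm's real code:

```shell
#!/usr/bin/env bash
# Hypothetical stdout-only health check in the style of
# etcdadm's member_is_ready; the real function may differ.
member_is_ready() {
  # Only stdout is piped to grep; stderr is discarded.
  "$@" 2>/dev/null | grep -q 'is healthy'
}

# Stand-in for etcdctl 3.2.7, which prints health info on stderr.
etcdctl_37() { echo '127.0.0.1:2379 is healthy' >&2; }

if member_is_ready etcdctl_37; then
  echo "ready"
else
  echo "not ready"   # this branch is taken: stdout was empty
fi
```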
Downgrading etcd to 3.2.6 solved the snapshot issue.
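Besides downgrading, a check that merges stderr into stdout before grepping would pass against both etcdctl versions, whichever stream the message lands on. A minimal sketch of that version-agnostic variant (again with hypothetical stand-ins, not etcdadm's actual fix):

```shell
#!/usr/bin/env bash
# Version-agnostic variant: 2>&1 merges stderr into stdout
# before grep, so the check works for either behavior.
member_is_ready() {
  "$@" 2>&1 | grep -q 'is healthy'
}

etcdctl_old() { echo '127.0.0.1:2379 is healthy'; }       # 3.2.6-style
etcdctl_new() { echo '127.0.0.1:2379 is healthy' >&2; }   # 3.2.7-style

member_is_ready etcdctl_old && echo "old: ready"
member_is_ready etcdctl_new && echo "new: ready"
```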
Please see this pull request and this pull request; I believe those were the changes in etcd that introduced this behavior.