Hi,
We have a similar issue to #281 but with different effects. We have a Kubernetes cluster deployed with kops in AWS. The master nodes, running etcd-manager 3.0.202005.31, are EC2 instances in the same availability zone (this is not a production cluster). The cluster is started every morning and destroyed every night.
When a master node starts, etcd-manager mounts the EBS volume (where the Kubernetes API objects are stored) based on the following tags:
I1218 05:50:26.648925 4224 main.go:279] Mounting available etcd volumes matching tags [k8s.io/etcd/main k8s.io/role/master=1 kubernetes.io/cluster/kubernetes.cluster.test=owned]; nameTag=k8s.io/etcd/main
These tags are common to all the volumes of the etcd cluster nodes, and since they are all in the same availability zone, etcd-manager cannot identify which volume belongs to which node. This causes two effects:
Sometimes one etcd node attaches the volume of another etcd node and vice versa. Although this is not a desired scenario, it has no real negative effect on the cluster because the information is synchronized and the Kubernetes cluster can run normally.
Other times, one etcd node mounts all 3 EBS volumes. This is a bad scenario because only one etcd node can run and the Kubernetes API is not accessible.
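For reference, here is a minimal sketch (not part of etcd-manager itself; the region is an assumption and the cluster tag is taken from the log line above) that queries EBS volumes with the AWS SDK for Go using the same filters etcd-manager logs at startup. It makes it easy to see that all three volumes come back from the same query, which is why any master in the AZ can end up attaching any (or all) of them:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Assumed region; adjust to your environment.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("eu-west-1")}))
	svc := ec2.New(sess)

	// Same tags etcd-manager reports at boot:
	// k8s.io/etcd/main, k8s.io/role/master=1, kubernetes.io/cluster/kubernetes.cluster.test=owned
	out, err := svc.DescribeVolumes(&ec2.DescribeVolumesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("tag-key"), Values: []*string{aws.String("k8s.io/etcd/main")}},
			{Name: aws.String("tag:k8s.io/role/master"), Values: []*string{aws.String("1")}},
			{Name: aws.String("tag:kubernetes.io/cluster/kubernetes.cluster.test"), Values: []*string{aws.String("owned")}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// All of the etcd volumes match these filters, so nothing here
	// distinguishes which volume belongs to which master node.
	for _, v := range out.Volumes {
		fmt.Printf("%s az=%s state=%s\n", aws.StringValue(v.VolumeId),
			aws.StringValue(v.AvailabilityZone), aws.StringValue(v.State))
	}
}
```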
Does anybody have the same issue and a workaround?
We think it's necessary to add another tag so that etcd-manager can identify the volume that belongs to each node, and to modify the boot process to check it. Maybe it's possible by checking the value of the k8s.io/etcd/main tag:
node / node1,node2,node3
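A minimal sketch of what such a check could look like, assuming the tag value format "node / node1,node2,node3" suggested above (the function name and the parsing of the value are illustrative assumptions, not the actual etcd-manager implementation):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// volumeBelongsToNode reports whether the k8s.io/etcd/main tag on an EBS
// volume names this node as the volume's owner. The assumed tag value
// format is "<node>/<node1>,<node2>,<node3>", per the proposal above.
func volumeBelongsToNode(vol *ec2.Volume, nodeName string) bool {
	for _, tag := range vol.Tags {
		if aws.StringValue(tag.Key) != "k8s.io/etcd/main" {
			continue
		}
		// The part before "/" identifies the node that owns this volume.
		owner := strings.SplitN(aws.StringValue(tag.Value), "/", 2)[0]
		return strings.TrimSpace(owner) == nodeName
	}
	return false
}

func main() {
	// Hypothetical volume with a per-node tag value.
	vol := &ec2.Volume{
		VolumeId: aws.String("vol-0123456789abcdef0"),
		Tags: []*ec2.Tag{
			{Key: aws.String("k8s.io/etcd/main"), Value: aws.String("node1/node1,node2,node3")},
		},
	}
	fmt.Println(volumeBelongsToNode(vol, "node1")) // true
	fmt.Println(volumeBelongsToNode(vol, "node2")) // false
}
```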
Thanks to everybody