1. What kops version are you running? The command `kops version` will display this information.
1.20.1
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
1.19.10
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Take a long-running cluster that uses the AWS VPC CNI, define the cluster's `spec.containerRuntime` as `containerd`, and begin rolling the nodes.
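For reference, the relevant slice of the cluster spec looks roughly like this (the cluster name and everything outside `containerRuntime` and the CNI choice are placeholders):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local        # placeholder cluster name
spec:
  containerRuntime: containerd   # switched from the docker default before rolling nodes
  networking:
    amazonvpc: {}                # AWS VPC CNI
  # ...remainder of the spec unchanged...
```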
5. What happened after the commands executed?
The cluster refuses to validate because of `aws-node` pods in `CrashLoopBackOff`.
Upon examination, we see the following in the pods' `kubectl describe` output (partial output supplied for clarity):
6. What did you expect to happen?
#10502 changed the mount to `containerd.sock`, which I presume is mandatory when the underlying node has been upgraded to `containerd`.
This is a tricky upgrade on a production cluster, as we don't want to update all of the pods in the DaemonSet before the underlying nodes have been rolled. Ideally, there would be two DaemonSets, one that selects dockershim nodes and one that selects `containerd` nodes, with the former DaemonSet cleaned up after the upgrade, or some other way of ensuring that the DaemonSet has access to the relevant socket as the nodes are upgraded.
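A rough sketch of what the containerd-side half of that split could look like, assuming a hypothetical node label applied as instance groups are rolled (the label, DaemonSet name, image tag, and volume names below are illustrative, not anything kops ships today):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: aws-node-containerd            # hypothetical; the stock DaemonSet is aws-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: aws-node-containerd
  template:
    metadata:
      labels:
        k8s-app: aws-node-containerd
    spec:
      nodeSelector:
        example.com/container-runtime: containerd   # hypothetical label set on rolled nodes
      containers:
      - name: aws-node
        image: amazon-k8s-cni:v1.7     # placeholder; reuse the image from the existing DaemonSet
        volumeMounts:
        - name: cri-socket
          mountPath: /run/containerd/containerd.sock   # illustrative mount path
      volumes:
      - name: cri-socket
        hostPath:
          path: /run/containerd/containerd.sock        # standard containerd socket path; see #10502 for the actual change
          type: Socket
```

The dockershim-side DaemonSet would be the mirror image (selecting the label's absence or a `docker` value, and mounting `dockershim.sock`), and would be deleted once the last dockershim node is gone.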