What happened:
We use kops to provision Kubernetes clusters on EC2 instances. After upgrading Kubernetes from 1.19.15 to 1.20.11, we ran the usual kops rolling update, and as a result the nodes don't come up because the aws-node container is failing.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned kube-system/aws-node-xq6fz to ip-*-*-8-42.eu-west-2.compute.internal
Normal Pulled 16m kubelet Successfully pulled image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10" in 368.736841ms
Normal Pulled 16m kubelet Successfully pulled image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10" in 196.850157ms
Normal Pulled 16m kubelet Successfully pulled image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10" in 292.006114ms
Warning Failed 16m (x4 over 16m) kubelet Error: failed to start container "aws-vpc-cni-init": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:415: setting cgroup config for procHooks process caused \\\"failed to write \\\\\\\"10000\\\\\\\" to \\\\\\\"/sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5bd132_e308_415c_aba3_afe2addf216b.slice/docker-aws-vpc-cni-init.scope/cpu.cfs_period_us\\\\\\\": write /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5bd132_e308_415c_aba3_afe2addf216b.slice/docker-aws-vpc-cni-init.scope/cpu.cfs_period_us: invalid argument\\\"\"": unknown
Normal Pulled 16m kubelet Successfully pulled image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10" in 269.844621ms
Normal Pulling 15m (x5 over 16m) kubelet Pulling image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10"
Normal Created 15m (x5 over 16m) kubelet Created container aws-vpc-cni-init
Normal Pulled 15m kubelet Successfully pulled image "602401143452.dkr.ecr.eu-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.10" in 319.840568ms
Warning BackOff 106s (x69 over 16m) kubelet Back-off restarting failed container
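For background on the failing write above: Kubernetes enforces CPU limits through the kernel's CFS bandwidth controller, and the container runtime writes a period (cpu.cfs_period_us, here 10000 µs, i.e. 10 ms rather than the 100 ms kubelet default) and a matching quota into the pod's cgroup; the node's kernel is rejecting that write with "invalid argument". A minimal sketch of the quota/period arithmetic (the function name and values are illustrative, not kubelet's actual code):

```python
def cfs_quota_us(cpu_limit_cores: float, period_us: int = 10_000) -> int:
    """Quota the runtime pairs with a CFS period in the cgroup cpu
    controller: quota = CPU limit (in cores) * period (in microseconds)."""
    return int(cpu_limit_cores * period_us)

# A container limited to 0.5 CPU with a 10 ms period gets a 5 ms quota.
print(cfs_quota_us(0.5))           # 5000
# With kubelet's default 100 ms period the same limit yields 50 ms.
print(cfs_quota_us(0.5, 100_000))  # 50000
```

A 10 ms period points at a non-default kubelet/runtime configuration (e.g. a custom CFS quota period), which is the kind of setting discussed in the kubernetes issue linked below.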
Attach logs
# bash /opt/cni/bin/aws-cni-support.sh
bash: /opt/cni/bin/aws-cni-support.sh: No such file or directory
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): v1.20.11
CNI Version
OS (e.g: cat /etc/os-release): NAME="CentOS Linux 7 (Core)"
Kernel (e.g: uname -a): #1 SMP Tue Feb 18 14:02:23 CET 2020 x86_64 x86_64 x86_64 GNU/Linux
Based on the logs, the init container process is having trouble starting; its logs are not present because the container never started.
I found an issue that could be related to what you are seeing: kubernetes/kubernetes#72878 (comment)
This doesn't look like an issue related to aws-vpc-cni, so could you check the above comment?