NAMESPACE | SOURCE | OBJECT | REASON | MESSAGE
- | - | - | - | csi-snapshot-controller-7b6d59f669-8gg5g became leader
kube-system | default-scheduler | csi-azuredisk-node-s7kx7 | Scheduled | Successfully assigned kube-system/csi-azuredisk-node-s7kx7 to aks-nodepool1-35288426-vmss000001
kube-system | default-scheduler | coredns-autoscaler-6f8964bbf7-sdj2g | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system | default-scheduler | csi-azurefile-node-5q2kg | Scheduled | Successfully assigned kube-system/csi-azurefile-node-5q2kg to aks-nodepool1-35288426-vmss000001
kube-system | daemonset-controller | csi-azuredisk-node | SuccessfulCreate | Created pod: csi-azuredisk-node-s7kx7
kube-system | default-scheduler | cloud-node-manager-p25v4 | Scheduled | Successfully assigned kube-system/cloud-node-manager-p25v4 to aks-nodepool1-35288426-vmss000001
kube-system | daemonset-controller | cloud-node-manager | SuccessfulCreate | Created pod: cloud-node-manager-p25v4
kube-system | default-scheduler | kube-proxy-cx28f | Scheduled | Successfully assigned kube-system/kube-proxy-cx28f to aks-nodepool1-35288426-vmss000001
kube-system | default-scheduler | konnectivity-agent-7cf7879f5f-ngrcw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system | daemonset-controller | csi-azurefile-node | SuccessfulCreate | Created pod: csi-azurefile-node-5q2kg
kube-system | default-scheduler | metrics-server-6454f667df-z5dq8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system | default-scheduler | metrics-server-6454f667df-9mfff | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods
kube-system | daemonset-controller | kube-proxy | SuccessfulCreate | Created pod: kube-proxy-cx28f
kube-system | default-scheduler | konnectivity-agent-7cf7879f5f-fn7xh | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system | default-scheduler | coredns-75bbfcbc66-8mgwm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | Starting | Starting kubelet.
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | InvalidDiskCapacity | invalid capacity 0 on image filesystem
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | NodeHasSufficientMemory | Node aks-nodepool1-35288426-vmss000001 status is now: NodeHasSufficientMemory (x2)
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | NodeHasNoDiskPressure | Node aks-nodepool1-35288426-vmss000001 status is now: NodeHasNoDiskPressure (x2)
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | NodeHasSufficientPID | Node aks-nodepool1-35288426-vmss000001 status is now: NodeHasSufficientPID (x2)
default | node-controller | aks-nodepool1-35288426-vmss000001 | RegisteredNode | Node aks-nodepool1-35288426-vmss000001 event: Registered Node aks-nodepool1-35288426-vmss000001 in Controller
- | - | - | - | Container image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cloud-node-manager-p25v4 | Created | Created container cloud-node-manager
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cloud-node-manager-p25v4 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.5" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Created | Created container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cloud-node-manager-p25v4 | Started | Started container cloud-node-manager
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Started | Started container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Created | Created container node-driver-registrar
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Started | Started container node-driver-registrar
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.3" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Created | Created container azuredisk
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azuredisk-node-s7kx7 | Started | Started container azuredisk
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Started | Started container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Created | Created container node-driver-registrar
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Started | Started container node-driver-registrar
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.24.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Created | Created container azurefile
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | csi-azurefile-node-5q2kg | Started | Started container azurefile
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Created | Created container kube-proxy-bootstrap
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Pulled | Successfully pulled image "mcr.microsoft.com/mirror/docker/library/busybox:1.35" in 1.088565804s (1.088569904s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Started | Started container kube-proxy-bootstrap
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Started | Started container kube-proxy
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Created | Created container kube-proxy
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | kube-proxy-cx28f | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.25.6" already present on machine
kube-system | daemonset-controller | kube-proxy | SuccessfulCreate | Created pod: kube-proxy-fmv8z
kube-system | daemonset-controller | csi-azuredisk-node | SuccessfulCreate | Created pod: csi-azuredisk-node-4k7kw
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-7cf7879f5f-ngrcw | Created | Created container konnectivity-agent
default | kubelet, aks-nodepool1-35288426-vmss000000 | aks-nodepool1-35288426-vmss000000 | NodeHasSufficientMemory | Node aks-nodepool1-35288426-vmss000000 status is now: NodeHasSufficientMemory (x8)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-7cf7879f5f-ngrcw | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110" already present on machine
kube-system | daemonset-controller | cloud-node-manager | SuccessfulCreate | Created pod: cloud-node-manager-fjc25
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-7cf7879f5f-ngrcw | Started | Started container konnectivity-agent
kube-system | default-scheduler | csi-azuredisk-node-4k7kw | Scheduled | Successfully assigned kube-system/csi-azuredisk-node-4k7kw to aks-nodepool1-35288426-vmss000000
kube-system | daemonset-controller | csi-azurefile-node | SuccessfulCreate | Created pod: csi-azurefile-node-jc5q4
kube-system | default-scheduler | cloud-node-manager-fjc25 | Scheduled | Successfully assigned kube-system/cloud-node-manager-fjc25 to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | kube-proxy-fmv8z | Scheduled | Successfully assigned kube-system/kube-proxy-fmv8z to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | csi-azurefile-node-jc5q4 | Scheduled | Successfully assigned kube-system/csi-azurefile-node-jc5q4 to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | konnectivity-agent-7cf7879f5f-fn7xh | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
kube-system | default-scheduler | konnectivity-agent-7cf7879f5f-ngrcw | Scheduled | Successfully assigned kube-system/konnectivity-agent-7cf7879f5f-ngrcw to aks-nodepool1-35288426-vmss000001
default | node-controller | aks-nodepool1-35288426-vmss000000 | RegisteredNode | Node aks-nodepool1-35288426-vmss000000 event: Registered Node aks-nodepool1-35288426-vmss000000 in Controller
kube-system | default-scheduler | konnectivity-agent-7cf7879f5f-fn7xh | FailedScheduling | 0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azurefile-node-jc5q4 | Started | Started container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azurefile-node-jc5q4 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azuredisk-node-4k7kw | Failed | Error: services have not yet been read at least once, cannot construct envvars
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azuredisk-node-4k7kw | Created | Created container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azurefile-node-jc5q4 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azurefile-node-jc5q4 | Created | Created container liveness-probe
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | csi-azuredisk-node-4k7kw | Failed | Error: services have not yet been read at least once, cannot construct envvars
- | - | - | - | Scaled up replica set cilium-operator-58fd795dc9 to 1
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-node-init-xdgnt | Pulled | Successfully pulled image "quay.io/cilium/startup-script:d69851597ea019af980891a4628fb36b7880ec26" in 1.467616939s (1.467722842s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-node-init-xdgnt | Created | Created container node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-node-init-xdgnt | Started | Started container node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Successfully pulled image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" in 10.404988171s (11.85803939s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container config
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Successfully pulled image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" in 12.128864655s (12.128994558s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container config
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container config
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container config
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container mount-cgroup
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container mount-cgroup
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container mount-cgroup
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container mount-cgroup
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-node-init-7qdmr | Pulled | Successfully pulled image "quay.io/cilium/startup-script:d69851597ea019af980891a4628fb36b7880ec26" in 6.620648541s (18.735682988s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container apply-sysctl-overwrites
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container apply-sysctl-overwrites
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-node-init-7qdmr | Started | Started container node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container apply-sysctl-overwrites
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container apply-sysctl-overwrites
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-node-init-7qdmr | Created | Created container node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-operator-58fd795dc9-zwxgn | Pulled | Successfully pulled image "quay.io/cilium/operator-generic:v1.13.2@sha256:a1982c0a22297aaac3563e428c330e17668305a41865a842dec53d241c5490ab" in 8.327672933s (19.88534373s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container mount-bpf-fs
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container mount-bpf-fs
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-operator-58fd795dc9-zwxgn | Started | Started container cilium-operator
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-operator-58fd795dc9-zwxgn | Created | Created container cilium-operator
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container mount-bpf-fs
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container wait-for-node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container mount-bpf-fs
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container wait-for-node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container wait-for-node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container clean-cilium-state
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container wait-for-node-init
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container clean-cilium-state
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container clean-cilium-state
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container install-cni-binaries
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container install-cni-binaries
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container clean-cilium-state
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container install-cni-binaries
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Started | Started container cilium-agent
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | cilium-4gf9v | Created | Created container cilium-agent
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container install-cni-binaries
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Pulled | Container image "quay.io/cilium/cilium:v1.13.2@sha256:85708b11d45647c35b9288e0de0706d24a5ce8a378166cadc700f756cc1a38d6" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Created | Created container cilium-agent
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | cilium-kjsxv | Started | Started container cilium-agent
kube-system | default-scheduler | metrics-server-6454f667df-9mfff | Scheduled | Successfully assigned kube-system/metrics-server-6454f667df-9mfff to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | coredns-75bbfcbc66-8mgwm | Scheduled | Successfully assigned kube-system/coredns-75bbfcbc66-8mgwm to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | metrics-server-6454f667df-z5dq8 | Scheduled | Successfully assigned kube-system/metrics-server-6454f667df-z5dq8 to aks-nodepool1-35288426-vmss000000
kube-system | default-scheduler | coredns-autoscaler-6f8964bbf7-sdj2g | Scheduled | Successfully assigned kube-system/coredns-autoscaler-6f8964bbf7-sdj2g to aks-nodepool1-35288426-vmss000000
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | coredns-75bbfcbc66-8mgwm | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/coredns:v1.9.3" already present on machine
- | - | - | - | Successfully pulled image "mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5.2" in 1.401417044s (1.401528646s including waiting)
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | coredns-75bbfcbc66-vbm22 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/coredns:v1.9.3" already present on machine
kube-system | default-scheduler | coredns-75bbfcbc66-vbm22 | Scheduled | Successfully assigned kube-system/coredns-75bbfcbc66-vbm22 to aks-nodepool1-35288426-vmss000001
- | - | - | - | Successfully pulled image "quay.io/cilium/hubble-relay:v1.13.2@sha256:51b772cab0724511583c3da3286439791dc67d7c35077fa30eaba3b5d555f8f4" in 1.79327225s (1.79327845s including waiting)
default | kubelet, aks-nodepool1-35288426-vmss000001 | aks-nodepool1-35288426-vmss000001 | NodeReady | Node aks-nodepool1-35288426-vmss000001 status is now: NodeReady (x2)
kube-system | taint-controller | coredns-75bbfcbc66-vbm22 | TaintManagerEviction | Cancelling deletion of Pod kube-system/coredns-75bbfcbc66-vbm22
kube-system | taint-controller | metrics-server-565c76bfcf-kml6q | TaintManagerEviction | Cancelling deletion of Pod kube-system/metrics-server-565c76bfcf-kml6q
kube-system | taint-controller | metrics-server-565c76bfcf-xtxf9 | TaintManagerEviction | Cancelling deletion of Pod kube-system/metrics-server-565c76bfcf-xtxf9
kube-system | taint-controller | konnectivity-agent-7cf7879f5f-ngrcw | TaintManagerEviction | Cancelling deletion of Pod kube-system/konnectivity-agent-7cf7879f5f-ngrcw
cilium-test | default-scheduler | client2-78f748dd67-2hddq | Scheduled | Successfully assigned cilium-test/client2-78f748dd67-2hddq to aks-nodepool1-35288426-vmss000001
cilium-test | default-scheduler | client-7b78db77d5-dh72r | Scheduled | Successfully assigned cilium-test/client-7b78db77d5-dh72r to aks-nodepool1-35288426-vmss000001
cilium-test | replicaset-controller | client2-78f748dd67 | SuccessfulCreate | Created pod: client2-78f748dd67-2hddq
cilium-test | deployment-controller | client2 | ScalingReplicaSet | Scaled up replica set client2-78f748dd67 to 1
cilium-test | replicaset-controller | client-7b78db77d5 | SuccessfulCreate | Created pod: client-7b78db77d5-dh72r
cilium-test | deployment-controller | echo-same-node | ScalingReplicaSet | Scaled up replica set echo-same-node-56445df7f5 to 1
cilium-test | deployment-controller | client | ScalingReplicaSet | Scaled up replica set client-7b78db77d5 to 1
cilium-test | default-scheduler | echo-same-node-56445df7f5-nnn2p | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod affinity rules. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
cilium-test | replicaset-controller | echo-same-node-56445df7f5 | SuccessfulCreate | Created pod: echo-same-node-56445df7f5-nnn2p
cilium-test | default-scheduler | echo-other-node-5b4c56d689-zckr6 | Scheduled | Successfully assigned cilium-test/echo-other-node-5b4c56d689-zckr6 to aks-nodepool1-35288426-vmss000000
- | - | - | - | Successfully assigned cilium-test/echo-same-node-56445df7f5-nnn2p to aks-nodepool1-35288426-vmss000001
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client2-78f748dd67-2hddq | Created | Created container client2
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client-7b78db77d5-dh72r | Started | Started container client
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client2-78f748dd67-2hddq | Pulled | Successfully pulled image "quay.io/cilium/alpine-curl:v1.6.0@sha256:408430f548a8390089b9b83020148b0ef80b0be1beb41a98a8bfe036709c196e" in 397.164091ms (2.013813908s including waiting)
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client-7b78db77d5-dh72r | Created | Created container client
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client-7b78db77d5-dh72r | Pulled | Successfully pulled image "quay.io/cilium/alpine-curl:v1.6.0@sha256:408430f548a8390089b9b83020148b0ef80b0be1beb41a98a8bfe036709c196e" in 1.656483197s (1.656570899s including waiting)
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | client2-78f748dd67-2hddq | Started | Started container client2
cilium-test | kubelet, aks-nodepool1-35288426-vmss000000 | echo-other-node-5b4c56d689-zckr6 | Pulled | Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 9.750683467s (9.750688867s including waiting)
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | echo-same-node-56445df7f5-nnn2p | Created | Created container echo-same-node
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | echo-same-node-56445df7f5-nnn2p | Pulled | Successfully pulled image "quay.io/cilium/json-mock:v1.3.5@sha256:d5dfd0044540cbe01ad6a1932cfb1913587f93cac4f145471ca04777f26342a4" in 8.748137249s (9.514237478s including waiting)
- | - | - | - | Successfully pulled image "docker.io/coredns/coredns:1.10.0@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955" in 2.409856448s (2.409864748s including waiting)
cilium-test | kubelet, aks-nodepool1-35288426-vmss000000 | echo-other-node-5b4c56d689-zckr6 | Started | Started container dns-test-server
cilium-test | kubelet, aks-nodepool1-35288426-vmss000000 | echo-other-node-5b4c56d689-zckr6 | Created | Created container dns-test-server
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | echo-same-node-56445df7f5-nnn2p | Started | Started container dns-test-server
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | echo-same-node-56445df7f5-nnn2p | Pulled | Successfully pulled image "docker.io/coredns/coredns:1.10.0@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955" in 3.510890991s (3.510896191s including waiting)
cilium-test | kubelet, aks-nodepool1-35288426-vmss000001 | echo-same-node-56445df7f5-nnn2p | Created | Created container dns-test-server
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | metrics-server-6454f667df-z5dq8 | Killing | Stopping container metrics-server-vpa
kube-system | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-6454f667df to 0 from 1
- | - | - | - | Timeout when running plugin "/etc/node-problem-detector.d/plugin/check_redeploy.sh": state - signal: killed. output - ""
kube-system | replicaset-controller | konnectivity-agent-56dff4c758 | SuccessfulCreate | Created pod: konnectivity-agent-56dff4c758-2jlzh
kube-system | replicaset-controller | konnectivity-agent-7cf7879f5f | SuccessfulDelete | Deleted pod: konnectivity-agent-7cf7879f5f-fn7xh
kube-system | deployment-controller | konnectivity-agent | ScalingReplicaSet | Scaled down replica set konnectivity-agent-7cf7879f5f to 1 from 2
kube-system | deployment-controller | konnectivity-agent | ScalingReplicaSet | Scaled up replica set konnectivity-agent-56dff4c758 to 1 from 0
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | konnectivity-agent-7cf7879f5f-fn7xh | Killing | Stopping container konnectivity-agent
kube-system | default-scheduler | konnectivity-agent-56dff4c758-2jlzh | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | konnectivity-agent-56dff4c758-2jlzh | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110" already present on machine
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | konnectivity-agent-56dff4c758-2jlzh | Created | Created container konnectivity-agent
kube-system | default-scheduler | konnectivity-agent-56dff4c758-2jlzh | Scheduled | Successfully assigned kube-system/konnectivity-agent-56dff4c758-2jlzh to aks-nodepool1-35288426-vmss000000
kube-system | kubelet, aks-nodepool1-35288426-vmss000000 | konnectivity-agent-56dff4c758-2jlzh | Started | Started container konnectivity-agent
kube-system | default-scheduler | konnectivity-agent-56dff4c758-4zkn4 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
kube-system | replicaset-controller | konnectivity-agent-56dff4c758 | SuccessfulCreate | Created pod: konnectivity-agent-56dff4c758-4zkn4
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-7cf7879f5f-ngrcw | Killing | Stopping container konnectivity-agent
kube-system | replicaset-controller | konnectivity-agent-7cf7879f5f | SuccessfulDelete | Deleted pod: konnectivity-agent-7cf7879f5f-ngrcw
kube-system | deployment-controller | konnectivity-agent | ScalingReplicaSet | Scaled down replica set konnectivity-agent-7cf7879f5f to 0 from 1
kube-system | deployment-controller | konnectivity-agent | ScalingReplicaSet | Scaled up replica set konnectivity-agent-56dff4c758 to 2 from 1
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-56dff4c758-4zkn4 | Created | Created container konnectivity-agent
kube-system | default-scheduler | konnectivity-agent-56dff4c758-4zkn4 | Scheduled | Successfully assigned kube-system/konnectivity-agent-56dff4c758-4zkn4 to aks-nodepool1-35288426-vmss000001
kube-system | kubelet, aks-nodepool1-35288426-vmss000001 | konnectivity-agent-56dff4c758-4zkn4 | Pulled | Container image "mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110" already present on machine
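
A minimal sketch, not part of the original capture, of how a listing in the same NAMESPACE | SOURCE | OBJECT | REASON | MESSAGE shape can be pulled from a cluster with the official Kubernetes Python client. It assumes a reachable kubeconfig; the attributes used (metadata.namespace, source.component, source.host, involved_object.name, reason, message, count) are the standard core/v1 Event fields.

```python
# Sketch: print cluster events in the same pipe-delimited shape as the table above.
# Assumes a reachable kubeconfig; not part of the original event capture.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for ev in v1.list_event_for_all_namespaces().items:
    source = ev.source.component if ev.source and ev.source.component else "-"
    if ev.source and ev.source.host:
        # kubelet events carry the reporting node name as the source host
        source = f"{source}, {ev.source.host}"
    message = ev.message or "-"
    if ev.count and ev.count > 1:
        # fold the repeat count into the message, as done in the table above
        message = f"{message} (x{ev.count})"
    print(" | ".join([
        ev.metadata.namespace or "-",
        source,
        ev.involved_object.name if ev.involved_object else "-",
        ev.reason or "-",
        message,
    ]))
```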