I0111 17:19:07.763679 1 drain.go:157] All pods removed from xxxxxxxxx
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x3cbe750]
goroutine 20363521 [running]:
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.deleteNodesFromCloudProvider(0x40ac6659a8?, {0x40b8e1c328?, 0x1, 0x1})
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:153 +0x130
k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).deleteNodesAndRegisterStatus(0x401a342bc0, {0x40b8e1c328?, 0x1, 0x0?}, 0x0?)
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:88 +0x3c
created by k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation.(*NodeDeletionBatcher).AddNodes
/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/scaledown/actuation/delete_in_batch.go:74 +0xbc
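From the trace, the panic happens inside deleteNodesFromCloudProvider in delete_in_batch.go. Below is a minimal sketch of the suspected failure mode, assuming the cloud provider's NodeGroupForNode can return (nil, nil) for an instance that has already been terminated. The type names, the awsLike stub, and the nil guard are illustrative reconstructions, not the actual autoscaler source:

```go
package main

import (
	"errors"
	"fmt"
)

// NodeGroup and CloudProvider loosely mirror the cluster-autoscaler
// interfaces involved in the panic (simplified for illustration).
type NodeGroup interface {
	DeleteNodes(names []string) error
}

type CloudProvider interface {
	// Assumption: returns (nil, nil) when the node is no longer
	// backed by any known instance, e.g. after spot preemption.
	NodeGroupForNode(name string) (NodeGroup, error)
}

// awsLike is a hypothetical stub simulating an instance that AWS
// has already terminated: no node group found, but no error either.
type awsLike struct{}

func (awsLike) NodeGroupForNode(name string) (NodeGroup, error) {
	return nil, nil
}

// deleteNodesFromCloudProvider reconstructs the failure mode:
// without a nil check, calling a method on the nil NodeGroup
// panics with "invalid memory address or nil pointer dereference".
func deleteNodesFromCloudProvider(cp CloudProvider, name string) error {
	nodeGroup, err := cp.NodeGroupForNode(name)
	if err != nil {
		return err
	}
	// The guard that appears to be missing; without it the next
	// line dereferences a nil interface and panics.
	if nodeGroup == nil {
		return errors.New("node " + name + " no longer belongs to a node group")
	}
	return nodeGroup.DeleteNodes([]string{name})
}

func main() {
	fmt.Println(deleteNodesFromCloudProvider(awsLike{}, "ip-10-0-0-1"))
}
```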
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: v1.28.0
What k8s version are you using (kubectl version)?:
What environment is this in?:
AWS, using the AWS cloud provider.
What did you expect to happen?:
The node is deleted cleanly, with no panic.
What happened instead?:
Cluster-autoscaler panics with a nil pointer dereference in deleteNodesFromCloudProvider; see the stack trace at the top of this report.
How to reproduce it (as minimally and precisely as possible):
After the autoscaler decides to delete a node, the cloud provider removes the node out from under it. Spot instances in particular can be terminated at any time, which makes this race easy to hit. One way to trigger it by hand is shown below.
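A hedged reproduce sketch, assuming AWS CLI access to the account: let the autoscaler pick a node for scale-down, then terminate the backing EC2 instance out-of-band while the drain/delete is still in flight. The instance ID below is a placeholder:

```sh
# Simulate a spot preemption by terminating the instance out from
# under the autoscaler; <instance-id> is a placeholder.
aws ec2 terminate-instances --instance-ids <instance-id>
```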
Anything else we need to know?:
Before the panic, the AWS console showed a terminating event for the instance (triggered by a health check); the instance had in fact been preempted.