When testing with examples/hamster.yaml, the updater component panics with "runtime error: invalid memory address or nil pointer dereference" #6808
Comments
/area vertical-pod-autoscaler
It seems as though the mutatingwebhookconfiguration isn't configured in your setup.
Yes, but in the API declaration spec.updatePolicy is an optional field that defaults to Auto. Since it is optional, I should be able to leave it unset, yet with the current logic the updater cannot run when it is not configured.
Right, that is fair. |
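For illustration, here is a minimal, self-contained Go sketch of why an omitted optional field ends up as a nil pointer and panics when dereferenced without a guard. The types below are simplified stand-ins, not the real VPA API types; field names are assumptions made only for this example.

```go
// Sketch: optional fields in Kubernetes-style API structs are pointers, so
// they stay nil when the corresponding key is omitted from the manifest.
package main

import "fmt"

// Simplified stand-ins for the real VPA API types (names are assumptions).
type PodUpdatePolicy struct {
	UpdateMode *string
}

type VerticalPodAutoscalerSpec struct {
	// UpdatePolicy is optional, so it is a pointer and may be nil.
	UpdatePolicy *PodUpdatePolicy
}

func main() {
	// A VPA created from a manifest that omits spec.updatePolicy.
	spec := VerticalPodAutoscalerSpec{}

	// Dereferencing without a nil check reproduces the reported crash:
	// "runtime error: invalid memory address or nil pointer dereference".
	fmt.Println(*spec.UpdatePolicy.UpdateMode)
}
```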
@adrianmoisey Can you help review the code? I haven't seen any response on the PR yet.
Unfortunately I'm not a reviewer, so I can't approve it. |
Version 1.1.2 has been released including a fix to this issue. Thanks everyone! |
We see the same issue in 1.2.5.
Which component are you using?:
Vertical Pod Autoscaler
What version of the component are you using?:
Component version: 1.1.1
What k8s version are you using (kubectl version)?:
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.2", GitCommit:"4b8e819355d791d96b7e9d9efe4cbafae2311c88", GitTreeState:"clean", BuildDate:"2024-02-14T22:24:00Z", GoVersion:"go1.21.7", Compiler:"gc", Platform:"linux/amd64"}
What environment is this in?: local (kind-created cluster)
What did you expect to happen?:
The updater component scales pods vertically based on the recommendations generated by the recommender.
What happened instead?:
The updater panics:
E0509 03:44:19.458556 1 api.go:153] fail to get pod controller: pod=etcd-ha-control-plane3 err=Unhandled targetRef v1 / Node / ha-control-plane3, last error node is not a valid owner
E0509 03:44:19.458587 1 api.go:153] fail to get pod controller: pod=etcd-ha-control-plane err=Unhandled targetRef v1 / Node / ha-control-plane, last error node is not a valid owner
E0509 03:44:19.458864 1 api.go:153] fail to get pod controller: pod=kube-apiserver-ha-control-plane2 err=Unhandled targetRef v1 / Node / ha-control-plane2, last error node is not a valid owner
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x159129f]
goroutine 1 [running]:
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority.(*scalingDirectionPodEvictionAdmission).LoopInit(0xc000566538, {0x1a1dda3?, 0xa?, 0x40b?}, 0xc0002ad5c0)
/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority/scaling_direction_pod_eviction_admission.go:111 +0x11f
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic.(*updater).RunOnce(0xc0003342c0, {0x1c97290, 0xc0001741c0})
/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic/updater.go:183 +0xb44
main.main()
/gopath/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/main.go:127 +0x7ef
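As a rough illustration of the kind of nil guard that avoids this crash, the updater could fall back to the documented default, Auto, when spec.updatePolicy is left unset. This is a sketch only, not the actual code in scaling_direction_pod_eviction_admission.go or the upstream fix; the types and names are simplified stand-ins.

```go
// Sketch of a defensive fallback: treat a nil UpdatePolicy or nil UpdateMode
// as the default Auto mode instead of dereferencing the pointers directly.
package main

import "fmt"

// Simplified stand-ins for the VPA API types (names are assumptions).
type UpdateMode string

const UpdateModeAuto UpdateMode = "Auto"

type PodUpdatePolicy struct {
	UpdateMode *UpdateMode
}

type VerticalPodAutoscalerSpec struct {
	UpdatePolicy *PodUpdatePolicy
}

// effectiveUpdateMode returns the configured mode, falling back to the
// default when the optional fields were omitted from the manifest.
func effectiveUpdateMode(spec VerticalPodAutoscalerSpec) UpdateMode {
	if spec.UpdatePolicy == nil || spec.UpdatePolicy.UpdateMode == nil {
		return UpdateModeAuto
	}
	return *spec.UpdatePolicy.UpdateMode
}

func main() {
	// Manifest without spec.updatePolicy: no panic, the default mode is used.
	fmt.Println(effectiveUpdateMode(VerticalPodAutoscalerSpec{}))
}
```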
How to reproduce it (as minimally and precisely as possible):
./hack/vpa-up.sh
kubectl create -f examples/hamster.yaml
Anything else we need to know?: