Can't install VPA with Helm Configuration #5758
Comments
Hi @nikolic-milan
Hi @Shubham82, thank you for your answer!
Sorry, the VPA itself can be installed, but this issue is about the Cluster Autoscaler. The error you are facing is due to the version you are using: this feature was merged in CA v1.27, so you should use CA v1.27 and the corresponding k8s version, i.e. k8s v1.27.
Hi @gjtempleton and @voelzmo, is there anything I missed here? PTAL!
AWS EKS currently supports Kubernetes up to version 1.26. Would that be a high enough version?
I think it is due to the EKS version. |
I'll try and update my cluster to v1.26 since it's the latest supported version by AWS EKS. |
Yes, it was added in CA v1.27.
This only refers to CA compatibility with k8s, i.e. matching the minor release of the CA to the minor release of k8s.
Sorry, this is slightly confusing, due to the fact that we publish multiple projects from the same repo; it's not related to the CA version. The CA helm chart now gives the option to install a VPA object to scale the CA as appropriate; however, you need to have already installed the VPA in the cluster separately. We could obviously do with improving the docs around this to make it clearer.
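The setup described above can be sketched roughly as follows. This is a hedged sketch, not the documented procedure: the `vpa`/`enabled`/`updateMode` key names are assumptions about the chart's values schema (check the chart's `values.yaml` for your chart version), and the release name is a placeholder. The `hack/vpa-up.sh` script is the upstream VPA install script from the kubernetes/autoscaler repo.

```shell
# Step 1 (prerequisite): install the VPA itself separately, e.g. via the
# upstream script from the kubernetes/autoscaler repo:
#   git clone https://github.com/kubernetes/autoscaler.git
#   cd autoscaler/vertical-pod-autoscaler && ./hack/vpa-up.sh

# Step 2: enable the chart-managed VerticalPodAutoscaler object for the CA.
# Key names below are assumptions; verify against the chart's values.yaml.
cat > ca-values.yaml <<'EOF'
vpa:
  enabled: true        # have the chart create a VPA object targeting the CA
  updateMode: Auto     # assumed key; check the chart's values.yaml
EOF

# Step 3: install/upgrade the chart with these values (placeholder release name):
#   helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler -f ca-values.yaml
echo "wrote ca-values.yaml"
```

If step 1 is skipped, step 3 fails with the "no matches for kind VerticalPodAutoscaler" error from the original report, because the chart renders a VPA object whose CRD does not yet exist in the cluster.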
Thanks, @gjtempleton, for the above information. But to use this feature we should have CA v1.27, am I right?
We could add this to the VPA content in the README.
/kind documentation
No, it's unrelated to the version of the CA; it should work with any version of the CA, as it's not related to CA code. It's just a helm chart feature, available in any release of the chart from
I have checked it; you are right, I missed it. It makes sense, thanks for the clarification.
@gjtempleton, your thoughts on this?
I'm up for it. |
Fixed it here: #5763 |
Which component are you using?: cluster-autoscaler
What version of the component are you using?:
Component version: Latest Version
What k8s version are you using (kubectl version)?: 1.25
What environment is this in?: AWS EKS Kubernetes Version 1.24
What did you expect to happen?: I expected a successful installation of the autoscaler and the VPA.
What happened instead?: I got the following error:
unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "VerticalPodAutoscaler" in version "autoscaling.k8s.io/v1"\n
How to reproduce it (as minimally and precisely as possible): I used the following code with aws_cdk
Anything else we need to know?: Without the VPA configuration, the autoscaler installs successfully.
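The "no matches for kind VerticalPodAutoscaler" error above means the VPA CRD is not registered in the cluster. A quick way to check is sketched below; it assumes `kubectl` is configured against the target cluster, and uses the standard CRD name shipped by the VPA project.

```shell
# Sketch: check whether the VPA CRD exists before enabling the chart's VPA option.
if command -v kubectl >/dev/null 2>&1; then
  # The CRD name below is the one installed by the upstream VPA manifests.
  msg=$(kubectl get crd verticalpodautoscalers.autoscaling.k8s.io 2>/dev/null \
        || echo "VPA CRDs missing: install the VPA first, e.g. via hack/vpa-up.sh")
else
  msg="kubectl not found on this machine"
fi
echo "$msg"
```

If the CRD is missing, installing the VPA separately (as noted in the comments above) and re-running the helm install should resolve the error.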