(aws-eks): Make sure kubectl is compatible with the k8s version #15736
Hi @otaviomacedo, Based on my Slack conversation w/ @rix0rrr on #20000 here:
Would it be feasible for CDK to create official kubectl Lambda layers and deploy them in each region (or reference official ones if they exist), and then select one based on the EKS cluster version? Just brainstorming here... Would this be overkill? Would this even fall under CDK to manage, or should it be a standalone project? Is this too much to ask of CDK itself, or should CDK not go in that direction? ... etc. LMKWYT
cc-ing @pahud to see if he has any thoughts on this. Thanks!
Is there a status update on this? Any reason why there can't be a different layer version per EKS version? AWS only supports a given EKS version for 16 months, and 1.20 reaches end of support in approximately 3 months. Due to the rolling nature of EKS, CDK needs to support matching kubectl versions. Once 1.20 is EOS, the control plane will automatically migrate to the next supported version, breaking the use of kubectl either way.
I think we probably have to dynamically create the
And according to this version mapping, if we select cluster version … That being said, we probably should extend the lambda-layer-kubectl construct first. Any thoughts?
Really seems crazy there's nothing on the roadmap for this yet.
Definitely option 2 first, to unblock people and give yourselves some breathing space to rethink option 1.
Isn't it possible to retrieve the version of the passed cluster using the AWS API and use the appropriate kubectl version for that cluster?
If/when this is picked up, is there any chance this will also be implemented in CDK v1?
The release date of EKS 1.22 was April 4, 2022, and 1.23 is planned for August, but we still cannot upgrade using CDK.
We've been brainstorming a possible solution in #20596, but no definitive PR yet.
With Kubernetes 1.24 recently released, 1.21 is no longer a supported Kubernetes release! https://kubernetes.io/releases/patch-releases/#non-active-branch-history This has now turned into a serious (security) problem for anyone wanting to provision production-grade clusters via CDK.
@memark Amazon EKS 1.21 is supported through February 2023 per the official documentation.
@otterley Thanks, that's good information! I was not aware that AWS backported security patches. Then we're safe until February.
AWS EKS 1.23 is available, but we cannot upgrade even to 1.22 using CDK :(
I was so sick and tired of this problem that I adapted this workaround for our environment, and I have been running and maintaining several 1.22 clusters with this approach. So far I can recommend it; it's not a lot of overhead imho. Will try 1.23 soon.
Solved this problem in the best way: imported EKS into Terraform.
Now that kubectlLayer is supported, I believe this issue should be closed. |
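For reference, here is a minimal sketch of what pinning the layer looks like, assuming aws-cdk-lib v2 and one of the versioned `@aws-cdk/lambda-layer-kubectl-vXY` packages (here v24, matching a 1.24 cluster); the stack and construct names are illustrative:

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as eks from 'aws-cdk-lib/aws-eks';
import { KubectlV24Layer } from '@aws-cdk/lambda-layer-kubectl-v24';

const app = new App();
const stack = new Stack(app, 'EksStack');

// Pin the kubectl Lambda layer to the cluster's Kubernetes version,
// instead of relying on the default kubectl bundled with the library.
new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_24,
  kubectlLayer: new KubectlV24Layer(stack, 'KubectlLayer'),
});
```

When the cluster's `version` is bumped, the layer package (and the `KubectlVXYLayer` class) should be bumped alongside it so client and server stay in skew.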
|
Currently, to update Kubernetes resources, we use kubectl version 1.20, hardcoded in a Lambda layer. For now, this works fine, but eventually, as new Kubernetes versions are released, they won't be compatible with the client anymore. Also, it's not an option to always use the latest version of the client, as we know that kubectl version 1.21.0 is not compatible with version 1.20 (and lower) of the server.