[aws-eks] Remove support for non-kubectlEnabled clusters #9332
While I understand that the situation is confusing for users, this is unfortunately one of our own making.
I work with customers and users who are already surprised by the default behavior of the Cluster constructor and its children. Their expectation is that when you create a cluster, the principal that created the stack will be able to log in with kubectl thereafter, just as though you had used raw CloudFormation. Making matters worse is that the attribute that addresses their issue is called `kubectlEnabled`. While I recognize the utility of having a custom resource that creates EKS clusters, particularly to paper over CloudFormation's current inability to provision private endpoints, I think we must be careful not to unnecessarily disturb existing user expectations in the course of gaining some functionality improvements. Not everybody needs private endpoints, and some would be happy to sacrifice them for getting the expected behavior back. Consider, too, that these CloudFormation deficiencies will be addressed in the coming months, and we may very well be able to deprecate the custom resource and go back to the CloudFormation resource provider.
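For reference, restoring that expectation today requires wiring the admin principal in explicitly. A minimal sketch using the L2's `mastersRole` prop and the `awsAuth` escape hatch (the role ARN and construct names are placeholders):

```ts
import * as cdk from '@aws-cdk/core';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'EksStack');

// Role that engineers (or the CI principal running `cdk deploy`) assume.
// The ARN is a placeholder.
const adminRole = iam.Role.fromRoleArn(
  stack, 'AdminRole', 'arn:aws:iam::111122223333:role/Admin');

const cluster = new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_17,
  // Mapped into the Kubernetes `system:masters` group, so this principal
  // can run kubectl against the cluster after deployment.
  mastersRole: adminRole,
});

// Equivalent escape hatch for principals added after the fact:
cluster.awsAuth.addMastersRole(adminRole);
```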
I think the current name of this property, and the behavior of how the default masters role is set up in the L2, created lots of confusion and is one of the reasons we need to address this holistically (see #9463). In addition to removing `kubectlEnabled`, I believe this will likely achieve the desired result.
I think that before we fully remove support for the CFN-backed L2 we can offer an alternative entry point (e.g. `eks.LegacyCluster`).
When specifying `kubectlEnabled: false`, it _implicitly_ meant that the underlying resource behind the construct would be the stock `AWS::EKS::Cluster` resource instead of the custom resource used by default. This meant that many new capabilities of EKS (e.g. Fargate profiles) would not be supported. Clusters backed by the custom resource have all the capabilities (and more) of clusters backed by `AWS::EKS::Cluster`. Therefore, we decided that going forward we are going to support only the custom-resource-backed solution.

To that end, after this change, defining an `eks.Cluster` with `kubectlEnabled: false` will throw an error with the following message:

> The "eks.Cluster" class no longer allows disabling kubectl support. As a temporary workaround, you can use the drop-in replacement class `eks.LegacyCluster`, but bear in mind that this class will soon be removed and will no longer receive additional features or bugfixes. See #9332 for more details.

Resolves #9332

BREAKING CHANGE: The experimental `eks.Cluster` construct no longer supports setting `kubectlEnabled: false`. A temporary drop-in alternative is `eks.LegacyCluster`, but we have plans to completely remove support for it in an upcoming release, since `eks.Cluster` has matured and should provide all the needed capabilities. Please comment on #9332 if there are use cases that are not supported by `eks.Cluster`.
Related: #9463
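To make the migration path concrete, here is a minimal sketch of what the change means for consuming code, assuming a `stack` in scope as in the earlier sketch and assuming `eks.LegacyCluster` mirrors `Cluster`'s required `version` prop:

```ts
import * as eks from '@aws-cdk/aws-eks';

// Before this change, opting out of kubectl support selected the stock
// AWS::EKS::Cluster resource. After this change, the same code throws:
//
//   new eks.Cluster(stack, 'Cluster', {
//     version: eks.KubernetesVersion.V1_17,
//     kubectlEnabled: false,  // now an error
//   });

// Temporary drop-in replacement, slated for removal:
new eks.LegacyCluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_17,
});
```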
I actively use this. I'm ok with this 👍
This breaks the kubectl layer.
@kaylanm Could you explain why? What in this change affects the layer?
This change makes the layer mandatory. The layer isn't published in SAR for all partitions, and there is no way to specify the ARN of your own SAR publication, so it's not possible to use this construct in that case.
I see. So this means that in regions where this layer isn't published, one would now use `eks.LegacyCluster`. I agree that since we plan on removing `eks.LegacyCluster`, this will need to be addressed.
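For illustration, the kind of escape hatch being asked for might look like the following. `lambda.LayerVersion.fromLayerVersionArn` is a standard CDK API, but the `kubectlLayer` cluster property shown here is hypothetical at the time of this discussion, and the ARN is a placeholder:

```ts
import * as lambda from '@aws-cdk/aws-lambda';

// Reference a self-published copy of the kubectl layer, e.g. in a
// partition where the SAR application is not available.
const kubectlLayer = lambda.LayerVersion.fromLayerVersionArn(
  stack, 'KubectlLayer',
  'arn:aws-cn:lambda:cn-north-1:111122223333:layer:kubectl:1');

new eks.Cluster(stack, 'Cluster', {
  version: eks.KubernetesVersion.V1_17,
  kubectlLayer, // hypothetical property: hand the layer to the handler
});
```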
Currently when instantiating an EKS cluster, a user can configure `kubectlEnabled: false`. This will cause the resource to fall back to its pure `CfnCluster` implementation, which is more limited than the implementation with the custom resource. Let's remove this code path and simply have all clusters be implemented with a custom resource, and thus all with `kubectlEnabled: true` by default.

**Use Case**

Supporting both scenarios creates a big maintenance burden and some strange quirks. For example, private cluster endpoint access is not supported by `CfnCluster`, but it's not strictly related to the ability to run `kubectl` commands, so forcing the user to enable kubectl in order to configure private endpoint access is unnatural. On the other hand, supporting this feature for the pure `CfnCluster` use case requires a big investment without good cause. I believe this code deviation is just a consequence of development cycles, and not necessarily intentional. There really is no reason why customers would opt to only use the pure `CfnCluster`.

**Proposed Solution**

Simply drop the `kubectlEnabled` flag, and use the custom resource implementation for all resources.

**Other**

This is a 🚀 Feature Request
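To illustrate the two code paths the issue describes, a minimal sketch; the `stack`, `vpc`, and `clusterRole` values are assumed to exist elsewhere:

```ts
import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

declare const stack: cdk.Stack;
declare const vpc: ec2.Vpc;
declare const clusterRole: iam.Role;

// The kubectlEnabled: false path -- effectively a stock AWS::EKS::Cluster.
// No aws-auth manipulation, no Fargate profiles, no private endpoint
// access, no kubectl-dependent features.
new eks.CfnCluster(stack, 'RawCluster', {
  roleArn: clusterRole.roleArn,
  resourcesVpcConfig: {
    subnetIds: vpc.privateSubnets.map(s => s.subnetId),
  },
});

// The default path -- backed by the custom resource, so the full feature
// set (kubectl, aws-auth mappings, Fargate profiles, ...) is available.
new eks.Cluster(stack, 'Cluster', {
  vpc,
  version: eks.KubernetesVersion.V1_17,
});
```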