[aws-eks] Remove support for non-kubectlEnabled clusters #9332

Closed
1 of 2 tasks
iliapolo opened this issue Jul 29, 2020 · 8 comments · Fixed by #9454

@iliapolo
Contributor

Currently, when instantiating an EKS cluster, a user can set kubectlEnabled: false:

new eks.Cluster(this, 'Cluster', {
  kubectlEnabled: false
});

This causes the resource to fall back to its pure CfnCluster implementation, which is more limited than the custom-resource-backed implementation.

Let's remove this code path and simply have all clusters be implemented with the custom resource, and thus all behave as kubectlEnabled: true by default.

Use Case

Supporting both scenarios creates a big maintenance burden and some strange quirks. For example, private cluster endpoint access is not supported by CfnCluster, but it is not strictly related to the ability to run kubectl commands, so forcing the user to enable kubectl in order to configure private endpoint access is unnatural.

On the other hand, supporting this feature for the pure CfnCluster use case requires a big investment without good cause. I believe this code divergence is just a consequence of development cycles, and not necessarily intentional. There really is no reason why customers would opt to only use the pure CfnCluster.

Proposed Solution

Simply drop the kubectlEnabled flag and use the custom resource implementation for all clusters.
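
For illustration, a rough sketch of how a cluster definition would look once the flag is gone. The endpointAccess prop is taken from the current custom-resource-backed API and is only meant to show the kind of capability the pure CfnCluster path cannot offer; the version value is illustrative:

new eks.Cluster(this, 'Cluster', {
  // No kubectlEnabled flag anymore: every cluster is backed by the custom resource.
  version: eks.KubernetesVersion.V1_17,
  // Features only the custom-resource implementation supports, such as
  // private-only endpoint access, work out of the box.
  endpointAccess: eks.EndpointAccess.PRIVATE,
});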

Other

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

iliapolo added the feature-request (A feature should be added or improved) and effort/small (Small work item – less than a day of effort) labels on Jul 29, 2020
iliapolo added this to the EKS Dev Preview milestone on Jul 29, 2020
iliapolo self-assigned this on Jul 29, 2020
iliapolo added the p1 and @aws-cdk/aws-eks (Related to Amazon Elastic Kubernetes Service) labels on Jul 29, 2020
@otterley
Contributor

otterley commented Aug 4, 2020

While I understand that the situation is confusing for users, this is unfortunately one of our own making.

There really is no reason why customers would opt to only use the pure CfnCluster.

I work with customers and users who are already surprised by the default behavior of the Cluster constructor and its children. Their expectation is that when they create a cluster, the principal that created the stack will be able to log in with kubectl thereafter, just as if they had used raw CloudFormation, eksctl, or aws eks create-cluster from the command line. When they find they can't do that, they are very frustrated and raise support questions. I've had to tell several customers and colleagues over the past months how to make things work the way they expect.

Making matters worse is that the attribute that addresses their issue is called kubectlEnabled, whose name implies a consequence that is untrue (setting it to false doesn't make the cluster inaccessible via kubectl). Arguably it should have been called manageMasterRole.

While I recognize the utility of having a custom resource that creates EKS clusters--particularly to paper over CloudFormation's current inability to provision private endpoints--I think we must be careful not to unnecessarily disturb existing user expectations in the course of gaining some functionality improvements. Not everybody needs private endpoints, and some would be happy to sacrifice them to get the expected behavior back. Consider, too, that these CloudFormation deficiencies will be addressed in the coming months, and we may very well be able to deprecate the custom resource and go back to the CF resource provider.

@eladb
Contributor

eladb commented Aug 4, 2020

I think the current name of this property, together with the way the default masters role is set up in the L2, has created a lot of confusion and is one of the reasons we need to address this holistically (see #9463).

I think that in addition to removing kubectlEnabled (which technically means we are not using AWS::EKS::Cluster under the hood but rather our custom resource), we should also change the masters role's default trust policy to allow assumption by anyone in the account (with the right permissions).

I believe this will likely achieve the desired result:

  1. A simple new eks.Cluster() will be accessible to anyone who has permissions to assume the masters role (the command line that is output after cdk deploy should just work)
  2. All users will use the custom-resource-backed cluster, so they can enjoy its current and future benefits.

I think that before we fully remove support for the CFN-backed L2, we can offer an alternative entry point (e.g. eks.LegacyCluster or something like that).
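
As a rough sketch of item 1, assuming the CDK v1 module layout and the existing mastersRole prop (the exact default trust policy the construct would ship with is still to be decided), the proposed behavior is roughly what you get today by wiring it up manually:

import * as iam from '@aws-cdk/aws-iam';
import * as eks from '@aws-cdk/aws-eks';

// A masters role that any principal in the account (with permission to assume it)
// can assume, mirroring the proposed default trust policy.
const mastersRole = new iam.Role(this, 'MastersRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

new eks.Cluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_17, // illustrative version
  mastersRole,
});

With such a role, the kubeconfig command printed after cdk deploy should work for anyone in the account who is allowed to assume it.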

eladb assigned eladb and unassigned iliapolo on Aug 5, 2020
eladb changed the title from "[@aws-eks] Remove support for non kubectl enabled cluster" to "[@aws-eks] Remove support for non-kubectlEnabled clusters" on Aug 5, 2020
eladb pushed a commit that referenced this issue Aug 5, 2020
When specifying `kubectlEnabled: false`, it _implicitly_ meant that the underlying resource behind the construct would be the stock `AWS::EKS::Cluster` resource instead of the custom resource used by default. This means that many new capabilities of EKS would not be supported (e.g. Fargate profiles).

Clusters backed by the custom-resource have all the capabilities (and more) of clusters backed by `AWS::EKS::Cluster`. Therefore, we decided that going forward we are going to support only the custom-resource backed solution.

To that end, after this change, defining an `eks.Cluster` with `kubectlEnabled: false` will throw an error with the following message:

    The "eks.Cluster" class no longer allows disabling kubectl support.
    As a temporary workaround, you can use the drop-in replacement class `eks.LegacyCluster`
    but bear in mind that this class will soon be removed and will no longer receive additional
    features or bugfixes. See #9332 for more details

Resolves #9332

BREAKING CHANGE: The experimental `eks.Cluster` construct no longer supports setting `kubectlEnabled: false`. A temporary drop-in alternative is `eks.LegacyCluster`, but we have plans to completely remove support for it in an upcoming release since `eks.Cluster` has matured and should provide all the needed capabilities. Please comment on #9332 if there are use cases that are not supported by `eks.Cluster`.
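
For anyone hitting the new error, a minimal sketch of the temporary workaround described above (the prop set is assumed to match the old `CfnCluster`-backed `eks.Cluster`; check the `eks.LegacyCluster` docs for the exact API):

// Temporary drop-in replacement for the removed `kubectlEnabled: false` path.
// Bear in mind that eks.LegacyCluster is itself slated for removal.
new eks.LegacyCluster(this, 'Cluster', {
  version: eks.KubernetesVersion.V1_17, // illustrative version
});
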
@eladb
Contributor

eladb commented Aug 5, 2020

Related: #9463

SomayaB added the in-progress (This issue is being actively worked on) label on Aug 5, 2020
mergify bot closed this as completed in #9454 on Aug 6, 2020
mergify bot pushed a commit that referenced this issue Aug 6, 2020
eladb pushed a commit that referenced this issue Aug 10, 2020
@pgollucci

pgollucci commented Aug 10, 2020

I actively use this. I'm ok with this 👍

iliapolo removed the in-progress (This issue is being actively worked on) label on Aug 16, 2020
iliapolo changed the title from "[@aws-eks] Remove support for non-kubectlEnabled clusters" to "[aws-eks] Remove support for non-kubectlEnabled clusters" on Aug 16, 2020
@kaylanm
Contributor

kaylanm commented Aug 21, 2020

This breaks eks.Cluster for partitions other than aws and aws-cn until KubectlLayer is present there or #7992 is resolved.

@iliapolo
Contributor Author

@kaylanm Could you explain why? What in this change affects the layer?

@kaylanm
Contributor

kaylanm commented Aug 23, 2020

@kaylanm Could you explain why? What in this change affects the layer?

This change makes the layer mandatory. The layer isn't published in SAR for all partitions, and there is no way to specify the ARN of your own SAR publication, so it's not possible to use this construct in that case.

@iliapolo
Contributor Author

I see. So this means that in regions where this layer isn't published, one would now have to use eks.LegacyCluster instead of eks.Cluster with kubectlEnabled: false.

I agree that since we plan on removing eks.LegacyCluster eventually, this issue needs to be resolved. Taking this into consideration. Thanks!
