1.6.0 to 1.6.1 upgrade breaking changes: AWS::KMS::Key used for secret encryption can not be replaced #644
Comments
@shapirov103 ping
@bnaydenov thanks for bringing this to our attention. I unfortunately was out for some time; I will look into it and address it shortly.
@aliaksei-ivanou - looks like the issue is caused by the refactoring of the KMS key provider. Can you look into a workaround that enables continuity for clusters provisioned before 1.6.1?
Sorry for the issue. The associated code changes were meant to address the KMS key id confusion issue: you couldn't create more than one key in the same pattern because the key id was hardcoded. We also separated the create and lookup KMS key capabilities into two separate providers. To solve the issue in this PR, I added an optional KMS keyId parameter that allows using an existing key.
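For illustration, consuming the two separated providers might look like this (a sketch based on the description above; the class names `CreateKmsKeyProvider` and `LookupKmsKeyProvider` and their signatures should be verified against the PR):

```ts
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Create a fresh key; the id is no longer hardcoded, so several patterns can coexist.
const createKey = new blueprints.CreateKmsKeyProvider('my-key-alias');

// Reuse an existing key instead of creating one.
const lookupKey = new blueprints.LookupKmsKeyProvider('my-key-alias');

blueprints.EksBlueprint.builder()
  .resourceProvider(blueprints.GlobalResources.KmsKey, createKey); // or lookupKey
```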
I've had this same issue after upgrading. This is part of a broader problem with changing how logical ids are generated: any change to a construct's id across (blueprint) versions has very serious consequences for existing deployments. In this case, changing the key's id renders the existing (physical) KMS key orphaned (it is not physically deleted, because the retention policy is RETAIN by default, but the CloudFormation stack no longer has any knowledge of it), a new key gets created and associated with the cluster's secrets encryption, and that last step fails, because the encryption key of a cluster cannot be changed after creation. The workaround is to register the existing key explicitly:

```ts
import * as kms from 'aws-cdk-lib/aws-kms';
import {
  EksBlueprint, GlobalResources, ResourceContext, ResourceProvider,
} from '@aws-quickstart/eks-blueprints';

declare const EXISTING_KEY_ARN: string; // ARN of the orphaned key, taken from the KMS console

EksBlueprint.builder()
  .account(process.env.CDK_DEFAULT_ACCOUNT)
  .region(process.env.CDK_DEFAULT_REGION)
  .resourceProvider(GlobalResources.KmsKey, new class implements ResourceProvider<kms.IKey> {
    provide(context: ResourceContext): kms.IKey {
      // Reference the orphaned key by its ARN instead of creating a new one.
      return kms.Key.fromKeyArn(context.scope, 'eks-secrets-existing-encryption-key', EXISTING_KEY_ARN);
    }
  })
  // ...then .build(...) as usual.
```

You can find the existing key in the KMS console by its description. However, once you have deployed the upgraded stack, the original key is already orphaned. My suggestion would be that future changes that involve changing how ids are generated are either rejected, or treated as breaking changes, with an upgrade guide provided for people wanting to upgrade to the new version.
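For context on why a renamed construct id orphans the resource: CDK derives the CloudFormation logical id from the construct path, so a new construct id means a brand-new logical id. A minimal illustration (not blueprint code; `stack` is assumed to exist):

```ts
import { Stack } from 'aws-cdk-lib';
import * as kms from 'aws-cdk-lib/aws-kms';

declare const stack: Stack; // assumed for illustration

// Release N: the logical id is derived from the construct id 'kms-key' (plus a hash).
new kms.Key(stack, 'kms-key');

// Release N+1: a renamed construct id produces a different logical id, so
// CloudFormation creates a new key and forgets the old one (RETAIN keeps the
// physical key around, but outside the stack).
new kms.Key(stack, 'secrets-encryption-key');
```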
Is there any way to test for this kind of breaking change in CI? For example: deploy a blueprint with an existing version, upgrade to the commit to be tested, and run the deployment again.

TBH, it is not clear to me how the workaround suggested by @aliaksei-ivanou should be used.
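A cheaper variant that catches the id change without a live deployment could pin the synthesized logical ids in a unit test (a sketch, assuming Jest and the builder's `buildAsync`; the pinned id below is a placeholder that would be recorded from the previous release):

```ts
import { App } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import * as blueprints from '@aws-quickstart/eks-blueprints';

test('KMS key logical id is stable across releases', async () => {
  const app = new App();
  const stack = await blueprints.EksBlueprint.builder()
    .account('123456789012') // dummy env so synthesis does not hit AWS
    .region('us-east-1')
    .buildAsync(app, 'id-stability');

  // All AWS::KMS::Key resources in the synthesized template, keyed by logical id.
  const keys = Template.fromStack(stack).findResources('AWS::KMS::Key');

  // Placeholder: record the real logical id emitted by the previous release.
  expect(Object.keys(keys)).toContain('PinnedLogicalIdFromPreviousRelease');
});
```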
@Feder1co5oave I am not suggesting a workaround; you probably mean the workaround and PR from @aliaksei-ivanou.
That's right, you can use the workaround from that PR.
@Feder1co5oave I will add an item for backwards-compatibility tests to be included in CI, at least as part of the release. It most likely won't happen until July; such tests are a bit more expensive to run.
I believe that should fix it: 9fae44b
FYI, I had upgraded to 1.6.1 and deployed the stack in a non-production environment, making my cluster's KMS key an orphan. After great effort, I was able to re-import my KMS key into the stack under the previous logical id by using the experimental `cdk import` command.

My suggestion to all: don't do this. I had to jerry-rig a couple of code fixes in both eks-blueprints and the CDK CLI itself. Either roll back your stack right after the upgrade to 1.6.1 and recover your key (I'm not even sure this works?) or wait until the workaround is released.
This led me to a failed rollback; how did you proceed from that?

Edit: I was able to continue the rollback while ignoring the cluster, then ran deploy again on 1.7.0. Looks like it's working.
Thanks, @praveenperera - I will keep the issue open for a bit to collect more feedback. Happy to apply additional changes if needed.
I tried using both of the suggested providers, but still ran into errors. Am I using them wrong? Thanks for the help.
@yoshi23 the approach we used was to make the identifier of the created KMS key identical between the versions. So if you are migrating from 1.6.0 to 1.7.0 you don't need to register anything explicitly as a resource provider; it should just work. If you want to be explicit about it, it is the equivalent of using the create KMS key provider.

Changing to lookup providers will result in failure, since the CFN stack expects the key to be under its current state management (i.e. created).
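In other words, something along these lines would be the explicit equivalent (a sketch; `CreateKmsKeyProvider` is the name suggested by the refactoring described earlier in this thread, so verify it against the release):

```ts
import * as blueprints from '@aws-quickstart/eks-blueprints';

blueprints.EksBlueprint.builder()
  // Explicitly what 1.7.0 does by default. A lookup provider here would fail,
  // because CloudFormation expects to keep managing (i.e. to have created) the key.
  .resourceProvider(blueprints.GlobalResources.KmsKey, new blueprints.CreateKmsKeyProvider())
  // ...rest of the blueprint unchanged.
```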
Thank you for the quick reply @aliaksei-ivanou and @shapirov103! I confirm that I'm able to deploy clusters without specifying the key with eks-blueprints 1.7.0 and the matching aws-cdk version.

The issue for me was that I had multiple clusters, some deployed before and some after 1.6.1. The error came up with one of them as I tried to run a CDK deploy from CI/CD, and it failed with the encryption change error. Finding this issue and the fix here, I assumed the solution would be to force-provide the existing KMS key everywhere, but that led to a lot of inconsistencies between the clusters, which I couldn't really untangle anymore. (I ended up deleting the clusters and re-deploying with 1.7.0; they were pre-production, so I found that easier.)

Thank you for your work on the Blueprint framework, it is a great tool!
Assuming resolved, we can re-open if needed.
Describe the bug
An existing EKS cluster provisioned with cdk blueprint `1.6.0` and aws cdk `2.66.1`, using `.useDefaultSecretEncryption(true)` (which is set by default if not present), fails during `cdk deploy` when the cdk blueprint is upgraded to `1.6.1` and aws cdk to `2.72.0`. The error (the cluster's encryption config cannot be changed) appears in the CloudWatch log `/aws/lambda/eks-cluster-name-awscdk-OnEventHandler42BEBAE0-tg5FN0dg7Szg`.

The output of `cdk diff` shows that the default AWS KMS key needs to be destroyed and recreated. See aws-cloudformation/cloudformation-coverage-roadmap#1234, which describes that the EKS cluster encryption config cannot be changed.
Expected Behavior
Upgrading cdk blueprint from `1.6.0` and aws cdk `2.66.1` to `1.6.1` and aws cdk `2.72.0` should upgrade an existing EKS cluster created with the default setting `.useDefaultSecretEncryption(true)` without problems.
Current Behavior
Running `cdk deploy` fails; the CloudWatch log `/aws/lambda/eks-cluster-name-awscdk-OnEventHandler42BEBAE0-tg5FN0dg7Szg` contains the error.
is having this error:Reproduction Steps
Install an EKS cluster using cdk blueprint `1.6.0` and aws cdk `2.66.1`, then upgrade the cdk blueprint to `1.6.1` and aws cdk to `2.72.0` and run `cdk deploy`; this reproduces the issue. A minimal stack like the sketch below is enough.
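For reference, a minimal blueprint along these lines exhibits the problem (a sketch; the stack name is a placeholder and account/region are assumed to resolve from the environment):

```ts
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

blueprints.EksBlueprint.builder()
  .account(process.env.CDK_DEFAULT_ACCOUNT)
  .region(process.env.CDK_DEFAULT_REGION)
  // This is the default anyway; shown explicitly because the bug concerns the KMS key it creates.
  .useDefaultSecretEncryption(true)
  .build(app, 'eks-cluster-name');
```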
Possible Solution
No response
Additional Information/Context
No response
CDK CLI Version
2.72.0 (build 397e46c)
EKS Blueprints Version
1.6.1
Node.js Version
v16.17.0
Environment details (OS name and version, etc.)
Mac OS 12.6 Monterey - M1 Max
Other information
No response