cli: Use existing S3 bucket for cdk bootstrap #3684
Comments
Is there any progress on this? The current "create bucket" model fails due to security issues in our environment. The best workaround seems to be creating the bucket manually up-front.
A coworker recently talked to some AWS developers about CDK. They said that the bootstrapping process was unlikely to change much due to the large potential impact on existing projects. Can anyone close to the project comment on whether the above solution is acceptable in light of that comment? Or would it be considered breaking to an unacceptable degree?

Edit: I've been digging around in the bootstrapping code, and noticed a previous commit by @eladb referenced having removed dependence on the CDK in this code due to a cyclic dependency. I assume that means I can't use the s3 module to solve this problem? Apologies in advance for any stupid questions. I'm a relative noob when it comes to node projects and a total noob in the cdk project.
+1 We have the same security limitation as mentioned above. Please allow using an existing bucket during bootstrap!
+1 same IAM limitation. Maybe we can add a bucket ARN in cdk.json?
+1 same IAM limitation.
I was able to work around this and use an existing bucket with my own CloudFormation template outside of the bootstrap command. The key is that the stack you use to create your bucket needs to have two outputs: BucketName and BucketDomainName.

For example, a simple CFN template:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  StagingBucket:
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref StagingBucket
  BucketDomainName:
    Value: !GetAtt StagingBucket.RegionalDomainName

With the Outputs set, you can pass the stack name to cdk when deploying with the --toolkit-stack-name option. The existing bucket will then be used rather than creating a brand new bucket.

If the bucket already exists outside of a CloudFormation stack, you can use the import functionality of CloudFormation to import the existing bucket into a CloudFormation stack as described here, but you won't be able to add Outputs during the import. What you can do is run bootstrap against the new stack name with the imported resources, and that will update the stack with the required Outputs. With that set you can then run deploy pointing at this imported stack name. Hopefully that helps until there's an enhancement.
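To make the workaround above concrete, here is a minimal sketch of the commands involved, assuming the template above is saved as staging-bucket.yaml and deployed as a stack named MyStagingStack (both names are hypothetical):

# Create the bucket stack with the required Outputs.
aws cloudformation deploy --stack-name MyStagingStack --template-file staging-bucket.yaml

# Point the CDK CLI at that stack instead of the default CDKToolkit stack.
cdk deploy --toolkit-stack-name MyStagingStack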
@joseph-behrens Do you mind if I adapt those instructions for use in the AWS CDK Developer Guide? We can definitely use more detail around bootstrapping there.
@jerry-aws Not at all! Go for it.
@joseph-behrens and @jerry-aws, the only problem with supplying the whole stack yourself is that, in the long term, you'll be forced to always keep the resources in your custom stack in sync with what the CDK is expecting to be present. Probably not a big deal right now, but it might be down the road if more resources are added to the default stack that you don't wish to administer yourself. It might just be worth mentioning that in the docs.
@joseph-behrens' strategy works well! A note: if you name your created stack CDKToolkit (the CDK CLI's default toolkit stack name), you won't need to pass the stack name explicitly on every command.
I have this issue too. I'm allowed to use CloudFront in development but not S3 buckets. I have two stacks, one for ElasticSearch and another for CloudFront: the first uses an EU region, while the second has to deploy to us-east-1 because of CloudFront and Lambda@Edge, but it fails as I have not bootstrapped in us-east-1, and I don't have permission to create buckets in us-east-1. How could the second stack instead use the bootstrapped S3 bucket from the first stack in an EU region? Do you have an example in CDK code, not CloudFormation? The API docs were not clear to me.
There are many requests for customization of the built-in bootstrapping template. Rather than implementing each and every request, it's more productive to allow users to help themselves. This change introduces two new flags to `cdk bootstrap`:

* `cdk bootstrap --show-template`: prints the current template to stdout, which people can pipe to a file.
* `cdk bootstrap --template FILE`: reads the template from a file instead of using the built-in template. This can be used to arbitrarily customize the bootstrapping template for use in any organization.

I know that the documentation changes in this PR are pretty light, but really a Developer Guide topic should be written on bootstrapping, which is next on my TODO list.

Resolves #9256, resolves #8724, resolves #3684, resolves #1528, necessary for #9681.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
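For reference, a rough sketch of the workflow these flags enable (the file name bootstrap-template.yaml is only an illustration):

# Dump the built-in bootstrapping template so it can be inspected and customized.
cdk bootstrap --show-template > bootstrap-template.yaml

# After editing the file (e.g. to reference an existing bucket), bootstrap
# using the customized template instead of the built-in one.
cdk bootstrap --template bootstrap-template.yaml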
I know this issue is already closed & I'm a little late to the party, but an idiomatic solution to this issue would be to use a changed StackSynthesizer. You can customize the DefaultStackSynthesizer. Example in Python:

from aws_cdk import App, DefaultStackSynthesizer

# MyStack is your own Stack subclass.
app = App()
MyStack(app, "MyStack",
    synthesizer=DefaultStackSynthesizer(
        file_assets_bucket_name="my-orgs-asset-bucket"
    )
)
@xcalizorz https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.DefaultStackSynthesizer.html
Note: for support questions, please first reference our documentation, then use Stack Overflow. This repository's issues are intended for feature requests and bug reports.
I'm submitting a ...
What is the current behavior?
Currently, cdk bootstrap with the --toolkit-bucket-name argument creates an S3 bucket if it doesn't exist. If the bucket does exist, it doesn't attempt to use the bucket; instead, it returns a CloudFormation error as it tries to create the bucket.
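For illustration, the kind of invocation described above, with placeholder account ID, region, and bucket name:

# Fails with a CloudFormation error if my-existing-bucket already exists,
# instead of reusing it.
cdk bootstrap --toolkit-bucket-name my-existing-bucket aws://123456789012/us-east-1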
What is the expected behavior (or behavior of feature suggested)?
The expected behaviour is that, if the bucket exists, cdk bootstrap should use that bucket for storing the CDK data. If the bucket does not exist, it should create the given bucket.
What is the motivation / use case for changing the behavior or adding this feature?
In large organizations spanning multiple accounts, it makes sense to use a single bucket to allow expiry and management of temporary artifacts and metadata in one place. Currently, with multiple stacks, there is no option to achieve this; instead we have to rely on multiple buckets being created across multiple accounts for the same purpose.
Please tell us about your environment:
Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. associated pull-request, stackoverflow, gitter, etc)