Avoid the 51200 bytes limitation errors of CloudFormation #38
Quick thought: how about uploading the template to S3, so as not to break the current usage and user experience? Also, I believe that doing so is an AWS-official way to go beyond the limit. We could also split the CloudFormation template into several parts, possibly into an etcd, a controller and a worker part. I'm not sure this can be done shortly, though. Does anybody have any ideas, thoughts, etc.?
OK, it turns out that @pieterlange had already mentioned this way of avoiding the limitation last Friday 👍
Writing randomly to organize my ideas 😃 Note that, even if we revert the last few commits to reduce the template size, this will become an issue sooner or later.
I agree that in the short term we shouldn't break the UX, and should upload the template to S3. Mid to long term, I think there's no way around having to kick off multiple CloudFormation templates, especially once we start looking at node pools (coreos/coreos-kubernetes#667). Starting multiple templates also involves a lot more work tracking resource IDs across templates. I'm currently splitting my kube deployments into VPC, etcd and kube-aws CloudFormation components, utilizing Outputs to glue them together.
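The Outputs-gluing pattern described above can be sketched as two templates, where the operator (or a deploy script) copies the first stack's Outputs into the second stack's Parameters. All resource and parameter names here are illustrative, not taken from kube-aws:

```yaml
# vpc-stack.yaml -- the base layer publishes its IDs via Outputs
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
Outputs:
  VpcId:
    Value: !Ref Vpc
```

```yaml
# etcd-stack.yaml -- the next layer takes those IDs as Parameters,
# filled in by hand or by a script reading the VPC stack's Outputs
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  EtcdSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: etcd peer traffic
      VpcId: !Ref VpcId
```

The manual (or scripted) Outputs-to-Parameters hand-off is the "tracking resource IDs across templates" work mentioned above.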
Nested stacks are also an option; however, it does get more complicated passing IDs between templates.
The new cross-stack references are probably going to be helpful for avoiding keeping track of Outputs. Plus, I think we should swap to the YAML version of the CloudFormation template, as it's more readable, and since it can be significantly smaller it may actually delay having to tackle this issue fully. I have some experience with both of those, for either a pull request or a review.
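For reference, the cross-stack references mentioned here work through `Export` on the producing stack and `Fn::ImportValue` on the consuming stack, with no Parameters to wire up. A minimal sketch (the export name and resources are illustrative):

```yaml
# network stack: an exported Output is visible to other stacks
# in the same account and region
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: my-network-VpcId
```

```yaml
# consumer stack: the value is resolved by export name, no Parameters needed
Resources:
  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: worker node traffic
      VpcId: !ImportValue my-network-VpcId
```

One consequence worth noting: CloudFormation refuses to delete or change an exported value while another stack imports it, so the stacks stay independent but become more tightly coupled at update time.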
Thanks everyone for your valuable feedback 🙇 To keep the ball rolling, I've submitted one possible way of resolving this in #45. Do you like it, or do you prefer:
in the short term or the long term? Any feedback is welcome 🙏
@iwarp @c-knowles May I ask how you usually decide which one to use, nested stacks or cross-stack references? I'll probably end up using one of them while building a POC for #46, but I'm not sure which way to go.
Since they're quite different patterns, sometimes you have to use one or the other. For example, say I wanted to link a single RDS layer to two stacks; then I'd have to use cross-stack references (or Outputs, doing it the old way). I tend to avoid nested stacks because they require the additional complexity of uploading to S3. The main difference from the perspective of updates is that a stack that uses nested stacks will only receive the updates within a nested stack on its own next update, whereas cross-stack references mean more of an immediate update, since the stacks are completely separate entities.
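By contrast with cross-stack references, the nested-stack pattern makes the children ordinary resources of a parent template, which is why the S3 upload step is unavoidable: each child's `TemplateURL` must point at S3. A rough sketch (the bucket, keys and parameter names are made up):

```yaml
# parent.yaml -- children are AWS::CloudFormation::Stack resources
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  EtcdStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/mybucket/etcd-stack.yaml
      Parameters:
        VpcId: !Ref VpcId
  WorkerStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/mybucket/worker-stack.yaml
      Parameters:
        # IDs flow between siblings via the parent, using GetAtt on Outputs
        EtcdEndpoint: !GetAtt EtcdStack.Outputs.EtcdEndpoint
```

Because the parent owns the children, `WorkerStack` only picks up changes to `worker-stack.yaml` on the parent's next update, which matches the update behaviour described in the comment above.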
…y avoid the 51200 bytes limitation errors of CloudFormation (#45)

feat: Add the `--s3-uri s3://<bucket>/<directory>` flag to automatically avoid the 51200 bytes limitation errors of CloudFormation

If you are hit by the CloudFormation limit, `kube-aws up`, `kube-aws update` and `kube-aws validate` now fail with a specific error explaining the limit and how to work around it:

```
$ kube-aws up
Creating AWS resources. This should take around 5 minutes.
Error: Error creating cluster: stack-template.json size(=51673) exceeds the 51200 bytes limit of cloudformation. `--s3-uri s3://<bucket>/path/to/dir` must be specified to upload it to S3 beforehand
...
```

As the error message says, if you provide the `--s3-uri` option composed of an S3 bucket's name and a path to a directory, `kube-aws` uploads your template to S3 if necessary:

```
$ kube-aws up --s3-uri s3://mybucket/mydir
```

Also note that `--s3-uri` accepts URIs without directories, too. Both `--s3-uri s3://mybucket` and `--s3-uri s3://mybucket/mydir` are valid.

The E2E test script now includes a cluster-updating test with `kube-aws update`.

resolves #38 (for now)
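The 51200-byte threshold in the error above can also be checked before running `kube-aws` at all. Here is a sketch of such a pre-flight check; the template path and message text are illustrative, and the limit value is the one quoted in this thread:

```shell
#!/bin/sh
# Pre-flight check mirroring the limit kube-aws enforces: templates larger
# than 51200 bytes must go through S3 (--s3-uri) instead of being passed inline.
LIMIT=51200

needs_s3_upload() {
  # wc -c counts bytes, which is what the CloudFormation limit is defined in
  size=$(wc -c < "$1")
  [ "$size" -gt "$LIMIT" ]
}

TEMPLATE=stack-template.json   # illustrative path
if [ -f "$TEMPLATE" ] && needs_s3_upload "$TEMPLATE"; then
  echo "template exceeds ${LIMIT} bytes; pass --s3-uri s3://<bucket>/<dir>"
fi
```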
Adding my 2 cents for people following the issue: as a workaround for large templates, you should store your template on S3 first and use the `--template-url https://s3-.amazonaws.com/...../yourtemplate.json` option in the AWS command line. The URL should be the HTTPS link of the template.
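As a rough illustration of how the two forms relate, an `s3://` URI maps onto the HTTPS URL that `--template-url` expects roughly like this. The `s3-<region>` host layout follows the comment above; exact S3 host formats vary by region and addressing style, so treat it as an assumption:

```shell
#!/bin/sh
# Sketch: derive an HTTPS template URL from an s3:// URI and a region.
# The "s3-<region>" host format is an assumption based on the comment above.
s3_uri_to_template_url() {
  uri=$1
  region=$2
  path=${uri#s3://}   # strip the s3:// scheme, keeping bucket/key
  echo "https://s3-${region}.amazonaws.com/${path}"
}

s3_uri_to_template_url s3://mybucket/mydir/stack-template.json us-west-1
```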
Thanks for the info! Yeah, and that's what kube-aws is doing for you :)
With the current master (6255751), when creating a test cluster for E2E testing, I've finally hit the limitation error: