Avoid the 51200 bytes limitation errors of CloudFormation #38

Closed · mumoshu opened this issue Nov 7, 2016 · 11 comments · Fixed by #45
@mumoshu
Contributor

mumoshu commented Nov 7, 2016

With the current master (6255751), while creating a test cluster for E2E testing, I've finally hit the limitation error:

```
*your template body here* at 'templateBody' failed to satisfy constraint: Member must have length less than or equal to 51200
```
@mumoshu
Contributor Author

mumoshu commented Nov 7, 2016

Quick thought: to avoid breaking the current usage and user experience of `kube-aws up`, I'm not specifically opposed to automatically putting the CloudFormation template on S3 before creating the stack in `kube-aws up`, at least for now.

Also, I believe that doing so is the AWS-official way to go beyond the limit.

We can also split the CloudFormation template into several parts, possibly into etcd, controller, and worker parts. I'm not sure this can be done quickly, though.

Anybody have any ideas, thoughts, etc?
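
For reference, a minimal sketch of the S3-based flow using the plain AWS CLI (the bucket, directory, and stack names below are placeholders, not actual kube-aws behavior):

```
# Upload the rendered template to S3; templates fetched from S3 may be
# up to 460,800 bytes, versus 51,200 bytes for an inline template body.
aws s3 cp stack-template.json s3://mybucket/mydir/stack-template.json

# Create the stack from the S3 URL instead of passing the body inline.
aws cloudformation create-stack \
  --stack-name my-kube-cluster \
  --template-url https://s3.amazonaws.com/mybucket/mydir/stack-template.json
```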

@mumoshu
Contributor Author

mumoshu commented Nov 7, 2016

OK, it turns out that @pieterlange had already mentioned this way of avoiding the limitation last Friday 👍
#29 (comment)

@mumoshu
Contributor Author

mumoshu commented Nov 7, 2016

Writing this down to organize my thoughts 😃

Note that even if we revert the last few commits to reduce the template size, this will become an issue again sooner or later.
Also, regardless of how we reduce the "minimum" template size like that, users can always add their own customizations to cloud-config-(worker|controller|etcd), stack-template.json, etc., which may or may not end up hitting the same limitation errors.

@pieterlange
Contributor

I agree that in the short term we shouldn't break UX, and should upload the template to S3.

Mid to long term, I think there's no way around kicking off multiple CloudFormation templates, especially once we start looking at node pools (coreos/coreos-kubernetes#667).

Starting multiple templates also involves a lot more work tracking resource IDs across templates. I'm currently splitting my kube deployments into VPC, etcd, and kube-aws CloudFormation components, utilizing Outputs to glue them together.
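
As a rough sketch of that glue (the resource and stack names here are illustrative, not my actual templates), the producing stack declares an Output and the consuming stack takes the value as a Parameter passed in at create/update time:

```
# vpc-stack: the producing stack exposes the VPC ID as an Output.
Outputs:
  VpcId:
    Description: ID of the shared VPC
    Value: !Ref Vpc

# kube-stack: the consuming stack receives the value as a Parameter,
# wired up manually or by a wrapper script when the stack is created.
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
```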

@iwarp

iwarp commented Nov 7, 2016

Nested stacks are also an option; however, it does get more complicated passing IDs between templates.
I've had to use S3 every time I've used CloudFormation.
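
For anyone unfamiliar with the pattern, a nested stack is just an `AWS::CloudFormation::Stack` resource whose template must live on S3, which is exactly why S3 keeps showing up. A minimal sketch (the URL and names are placeholders):

```
Resources:
  EtcdStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      # Nested templates are always fetched from S3.
      TemplateURL: https://s3.amazonaws.com/mybucket/etcd-stack.json
      Parameters:
        VpcId: !Ref Vpc

Outputs:
  EtcdEndpoint:
    # IDs flow back to the parent via the nested stack's own Outputs.
    Value: !GetAtt EtcdStack.Outputs.EtcdEndpoint
```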

@cknowles
Contributor

cknowles commented Nov 8, 2016

The new cross-stack references are probably going to be helpful for avoiding keeping track of Outputs. Plus, I think we should swap to the YAML version of the CloudFormation template, as it's more readable, and since it can be significantly smaller it may actually delay having to tackle this issue fully. I have some experience with both of those, for either a pull request or a review.
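
For concreteness, cross-stack references pair an `Export` on the producing stack with `Fn::ImportValue` in the consumer, with no Parameter plumbing in between (the stack and export names below are hypothetical):

```
# network-stack: export the value under a name unique within the region.
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: kube-network-VpcId

# kube-stack: import the exported value directly.
Resources:
  ControllerSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Controller security group
      VpcId: !ImportValue kube-network-VpcId
```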

@mumoshu mumoshu added this to the v0.9.1-rc.2 milestone Nov 8, 2016
@mumoshu
Contributor Author

mumoshu commented Nov 8, 2016

Thanks everyone for your valuable feedback 🙇

To keep the ball rolling, I've submitted one possible way of resolving this as #45.

Do you like it, or would you prefer one of the following, in the short term or the long term?

  • separate templates
  • separate templates + cross-stack references
  • nested stacks
  • a combination of the above

Any feedback is welcome 🙏

@mumoshu
Contributor Author

mumoshu commented Nov 10, 2016

@iwarp @c-knowles May I ask how you usually decide which one to use, nested stacks or cross-stack references?

I'll probably end up using one of those while building a POC for #46, but I'm not sure which one to go with.

@cknowles
Contributor

Since they're quite different patterns, sometimes you'd have to use one or the other. For example, say I wanted to link a single RDS layer to two stacks; then I'd have to use cross-stack references (or Outputs, doing it the old way).

I tend to avoid nested stacks because they require the additional complexity of uploading to S3.

The main difference from the perspective of updates is that a stack using nested stacks will only pick up changes within a nested stack on its own next update, whereas cross-stack references mean more of an immediate update, since the stacks are completely separate entities.
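
To make the RDS example concrete, a single exported value can be imported by any number of otherwise unrelated stacks (the names here are hypothetical):

```
# rds-stack exports the endpoint once:
Outputs:
  DbEndpoint:
    Value: !GetAtt Database.Endpoint.Address
    Export:
      Name: shared-rds-DbEndpoint

# ...then both app-stack-a and app-stack-b can consume it with:
#   DatabaseHost: !ImportValue shared-rds-DbEndpoint
```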

mumoshu added a commit to mumoshu/kube-aws that referenced this issue Nov 10, 2016
… avoid the 51200 bytes limitation errors of CloudFormation

If you are hit by the CloudFormation limit, `kube-aws up` now fails with a specific error explaining the limit and how to work around it:

```
$ kube-aws up
Creating AWS resources. This should take around 5 minutes.
Error: Error creating cluster: stack-template.json size(=51673) exceeds the 51200 bytes limit of cloudformation. `--s3-uri s3://<bucket>/path/to/dir` must be specified to upload it to S3 beforehand
...
```

As the error message says, if you provide the `--s3-uri` option composed of an S3 bucket name and a path to a directory, `kube-aws` uploads your template to S3 if necessary:

```
$ kube-aws up --s3-uri s3://mybucket/mydir
```

resolves kubernetes-retired#38 (for now)
mumoshu added a commit that referenced this issue Nov 10, 2016
…y avoid the 51200 bytes limitation errors of CloudFormation (#45)

feat: Add the `--s3-uri s3://<bucket>/<directory` flag to automatically avoid the 51200 bytes limitation errors of CloudFormation

If you are hit by the CloudFormation limit, `kube-aws up`, `kube-aws update`, and `kube-aws validate` now fail with a specific error explaining the limit and how to work around it:

```
$ kube-aws up
Creating AWS resources. This should take around 5 minutes.
Error: Error creating cluster: stack-template.json size(=51673) exceeds the 51200 bytes limit of cloudformation. `--s3-uri s3://<bucket>/path/to/dir` must be specified to upload it to S3 beforehand
...
```

As the error message says, if you provide the `--s3-uri` option composed of an S3 bucket name and a path to a directory, `kube-aws` uploads your template to S3 if necessary:

```
$ kube-aws up --s3-uri s3://mybucket/mydir
```

Also note that `--s3-uri` accepts URIs without directories, too.
Both `--s3-uri s3://mybucket` and `--s3-uri s3://mybucket/mydir` are valid.

The E2E test script now includes a cluster update test with `kube-aws update`.

resolves #38 (for now)
davidmccormick pushed a commit to HotelsDotCom/kube-aws that referenced this issue Jul 18, 2018
…/0.9.10-rc2-networking-serviceaccounts-rolling to hcom-flavour

* commit '97c66309d17b8337c269f9049b52a63b2baded12':
  remove clash
  The return of Maxim's service account key generation
  Update the copy to use /bin/sh
  Install cni binaries for legacy flannel install
  Correct syntax
  update in line with upstream pull request
  Explicitly do a create after the delete
  Handle kubectl failures by delete+add
  Correct apiserver not binding to 0.0.0.0 issue
  Add GPU support for kubernetes 1.9+ using device plugins (kubernetes-retired#1222)
  Missed another reference
  Remove reference to legacy config
@Tarvinder91

Adding my 2 cents for people following this issue: as a workaround for large templates, store your template on S3 first and use the following option on the AWS command line:

--template-url https://s3-.amazonaws.com/...../yourtemplate.json

The URL should be the HTTPS link to the template.
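
For example, a full command might look like this (the stack name, bucket, and region below are placeholders):

```
aws cloudformation create-stack \
  --stack-name mycluster \
  --template-url https://s3-us-west-2.amazonaws.com/mybucket/yourtemplate.json
```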

@mumoshu
Contributor Author

mumoshu commented Nov 13, 2018

Thanks for the info! Yeah, that's exactly what kube-aws is doing for you :)
