fix kubernetes-retired#110 kubernetes-retired#120 kubernetes-retired#135 kubernetes-retired#138 kubernetes-retired#119
Showing 10 changed files with 454 additions and 191 deletions.
## Destroy the cluster

When you are done with your cluster, run `kube-aws node-pools destroy` and then `kube-aws destroy`, and all cluster components will be destroyed.

If you created any node pools, you must destroy them first by running `kube-aws node-pools destroy`; otherwise `kube-aws destroy` will fail because the node pools still reference AWS resources managed by the main cluster.

If you created any Kubernetes services of type `LoadBalancer`, you must delete them first, as the CloudFormation stack cannot be fully destroyed while any externally managed resources still exist.
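For example, the teardown typically looks like the following. This is a minimal sketch: the `--node-pool-name` flag is assumed to mirror the other `node-pools` subcommands shown later, and the pool name is illustrative.

```
# Destroy each node pool first, then the main cluster.
$ kube-aws node-pools destroy --node-pool-name first-pool-in-1a
$ kube-aws destroy
```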
# Known Limitations

## hostPort doesn't work

This isn't really a kube-aws issue but rather a Kubernetes and/or CNI issue.
Either way, `hostPort` doesn't work when `hostNetwork: false`.

If you want to deploy `nginx-ingress-controller`, which requires `hostPort`, just set `hostNetwork: true`:

```
spec:
  hostNetwork: true
  containers:
  - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
    name: nginx-ingress-lb
```

Relevant kube-aws issue: [does hostPort not work on kube-aws/CoreOS?](https://github.com/coreos/kube-aws/issues/91)

See [the upstream issue](https://github.com/kubernetes/kubernetes/issues/23920#issuecomment-254918942) for more information.
# Node Pool

Node Pool allows you to bring up additional pools of worker nodes, each with a separate configuration (a sample pool configuration is sketched after this list) including:

* Instance Type
* Storage Type/Size/IOPS
* Instance Profile
* Additional, User-Provided Security Group(s)
* Spot Price
* AWS service used to manage your EC2 instances: [Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) or [Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html)
* [Node labels](http://kubernetes.io/docs/user-guide/node-selection/)
* [Taints](https://github.com/kubernetes/kubernetes/issues/17190)
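As a rough illustration, a node pool's `cluster.yaml` overrides settings like these for its own workers. The key names below (`workerInstanceType`, `workerRootVolumeSize`, `workerSpotPrice`) are assumptions modeled on the main cluster's configuration; treat [the commented node pool `cluster.yaml` template](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) as the authoritative reference:

```yaml
# Illustrative only -- verify each key against the node pool cluster.yaml template.
workerCount: 2
workerInstanceType: m3.large     # instance type for this pool
workerRootVolumeSize: 30         # root volume size in GiB
workerSpotPrice: "0.10"          # spot bid price; omit to use on-demand instances
```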
## Deploying a Multi-AZ cluster with cluster-autoscaler support using Node Pools

Edit the `cluster.yaml` file to decrease `workerCount`, which is the number of worker nodes in the "main" cluster, down to zero:

```yaml
workerCount: 0
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"
```

Update the main cluster to pick up the changes made in `cluster.yaml`:

```
$ kube-aws update \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

Create two node pools, each with its own subnet and availability zone:

```
$ kube-aws node-pools init --node-pool-name first-pool-in-1a \
  --availability-zone us-west-1a \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}

$ kube-aws node-pools init --node-pool-name second-pool-in-1b \
  --availability-zone us-west-1b \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}
```

Edit the `cluster.yaml` for the first zone:

```
$ $EDITOR node-pools/first-pool-in-1a/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.1.0/24"
```

Edit the `cluster.yaml` for the second zone:

```
$ $EDITOR node-pools/second-pool-in-1b/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1b
    instanceCIDR: "10.0.2.0/24"
```

Launch the node pools:

```
$ kube-aws node-pools up --node-pool-name first-pool-in-1a \
  --s3-uri s3://<my-bucket>/<optional-prefix>

$ kube-aws node-pools up --node-pool-name second-pool-in-1b \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```

Deploying cluster-autoscaler itself is currently out of the scope of this documentation.
Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions on setting it up.
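For orientation, a cluster-autoscaler deployment for this setup points the autoscaler at the two node pools' Auto Scaling Groups. The flags below are a minimal sketch and the ASG names are illustrative; follow the linked documentation for the authoritative manifest:

```
# Illustrative flags only -- the ASG names created by the node pool stacks will differ.
$ cluster-autoscaler \
    --cloud-provider=aws \
    --nodes=0:5:first-pool-in-1a-workers \
    --nodes=0:5:second-pool-in-1b-workers
```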
## Customizing min/max size of the auto scaling group

If you've chosen to power your worker nodes in a node pool with an auto scaling group, you can customize `MinSize`, `MaxSize`, and `MinInstancesInService` in `cluster.yaml`.

Please read [the AWS documentation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group-prop) for more information on `MinSize`, `MaxSize`, and `MinInstancesInService` for ASGs.

```
worker:
  # Auto Scaling Group definition for workers. If only `workerCount` is specified, min and max will be set to that value and `rollingUpdateMinInstancesInService` will be one less.
  autoScalingGroup:
    minSize: 1
    maxSize: 3
    rollingUpdateMinInstancesInService: 2
```

See [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) for further information.

## Deploying a node pool powered by Spot Fleet

Utilizing Spot Fleet gives you a chance to dramatically reduce the cost of the EC2 instances powering your Kubernetes worker nodes while achieving reasonable availability.
AWS advertises cost reductions of up to 90%, but actual savings vary with instance types and other users' bids.

Spot Fleet support may change in backward-incompatible ways as it is still an experimental feature, so please use it at your own risk.
However, we'd greatly appreciate your feedback because it accelerates improvements in this area!

This feature assumes you already have an IAM role with an ARN like "arn:aws:iam::youraccountid:role/aws-ec2-spot-fleet-role" in your AWS account.
In other words, it assumes you've visited the "Spot Requests" page in the EC2 Dashboard of the AWS console at least once.
See [the AWS documentation describing pre-requisites for Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) for details.
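If you're unsure whether that role already exists, you can check with the AWS CLI. This is just a quick sanity check; the role name follows the ARN mentioned above:

```
# Succeeds (and prints the role) if the Spot Fleet service role already exists;
# otherwise, follow the AWS Spot Fleet prerequisites documentation to create it.
$ aws iam get-role --role-name aws-ec2-spot-fleet-role
```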
To add a node pool powered by Spot Fleet, edit the node pool's `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 3
```

To customize your launch specifications so your pool is diversified among instance types other than the defaults, edit `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 5
    launchSpecifications:
    - weightedCapacity: 1
      instanceType: m3.medium
    - weightedCapacity: 2
      instanceType: m3.large
    - weightedCapacity: 2
      instanceType: m4.large
```

This configuration would normally result in Spot Fleet bringing up three instances to meet the target capacity of 5 (1 + 2 + 2 = 5):

* 1x m3.medium = 1 capacity
* 1x m3.large = 2 capacity
* 1x m4.large = 2 capacity

This is achieved by the `diversified` allocation strategy of Spot Fleet.
Please read [the AWS documentation describing Spot Fleet Allocation Strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.

Please also see [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/coreos/kube-aws/issues/112) of this feature for further information.

When you are done with your cluster, [destroy your cluster][aws-step-6].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
# Pre-requisites

If you're deploying a cluster with kube-aws (see the sketch after this list):

* [EC2 instance types of `m3.medium` or larger should be chosen for the cluster to work reliably](https://github.com/coreos/kube-aws/issues/138)
* [At least 3 etcd nodes, 2 controller nodes and 2 worker nodes are required to achieve high availability](https://github.com/coreos/kube-aws/issues/138#issuecomment-266432162)
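In terms of `cluster.yaml`, those recommendations translate roughly into the following settings. This is a minimal sketch; the key names are assumed to match the commented `cluster.yaml` generated by `kube-aws init`, so double-check them against your own file:

```yaml
# Assumed key names -- verify against the cluster.yaml generated by kube-aws init.
etcdCount: 3
controllerCount: 2
workerCount: 2
controllerInstanceType: m3.medium
workerInstanceType: m3.medium
```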
## Deploying to an existing VPC

If you're deploying a cluster to an existing VPC:

* An Internet Gateway needs to be added to the VPC before the cluster can be created
  * Otherwise, [all the nodes will fail to launch because they can't pull the Docker images or ACIs required to run essential processes like fleet, hyperkube, etcd, awscli, cfn-signal, and cfn-init](https://github.com/coreos/kube-aws/issues/120)
* Existing route tables to be reused by kube-aws must be tagged with the key `KubernetesCluster` and your cluster's name as the value (see the AWS CLI sketch after this list)
  * Otherwise, [Kubernetes will fail to create ELBs corresponding to Kubernetes services with `type=LoadBalancer`](https://github.com/coreos/kube-aws/issues/135)
* ["DNS Hostnames" must be turned on before the cluster can be created](https://github.com/coreos/kube-aws/issues/119)
  * Otherwise, etcd nodes are unable to communicate with each other and the cluster doesn't work at all
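As a concrete illustration of the last two points, the route table tag and the DNS hostnames attribute can be set with the AWS CLI. The resource IDs and the cluster name below are placeholders; this is a sketch, not a required step if you configure these through the console:

```
# Tag an existing route table so kube-aws/Kubernetes can reuse it
# (replace the route table ID and cluster name with your own).
$ aws ec2 create-tags \
    --resources rtb-0123456789abcdef0 \
    --tags Key=KubernetesCluster,Value=mycluster

# Turn on "DNS Hostnames" for the existing VPC.
$ aws ec2 modify-vpc-attribute \
    --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-hostnames "{\"Value\":true}"
```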
Once you understand the pre-requisites, you are [ready to launch your first Kubernetes cluster][aws-step-1].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md