Documentation for v0.9.2
mumoshu committed Dec 15, 2016
1 parent 9e618c9 commit e43c5c2
Showing 10 changed files with 454 additions and 191 deletions.
12 changes: 8 additions & 4 deletions Documentation/kube-aws-cluster-updates.md
@@ -1,4 +1,4 @@
# kube-aws cluster updates
# Updating the Kubernetes cluster

## Types of cluster update
There are two distinct categories of cluster update.
@@ -35,7 +35,11 @@ Fortunately, CoreOS update engine will take care of keeping the members of the e

In the (near) future, etcd will be hosted on Kubernetes and this problem will no longer be relevant. Rather than concocting an overly complex band-aid, we've decided to "punt" on this issue for the time being.
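To apply an update after editing `cluster.yaml` or re-rendering assets, the command looks like the following sketch (the `--s3-uri` value is a placeholder for your own bucket and prefix):

```sh
$ kube-aws update --s3-uri s3://<your-bucket-name>/<prefix>
```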

Once you have successfully updated your cluster, you are ready to [add node pools to your cluster][aws-step-5].




[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
8 changes: 8 additions & 0 deletions Documentation/kubernetes-on-aws-destroy.md
@@ -0,0 +1,8 @@
## Destroy the cluster

When you are done with your cluster, run `kube-aws node-pools destroy` and then `kube-aws destroy`, and all cluster components will be destroyed.

If you created any node pools, you must delete them first by running `kube-aws node-pools destroy`; otherwise `kube-aws destroy` will fail because the node pools still reference AWS resources managed by the main cluster.

If you created any Kubernetes Services of type `LoadBalancer`, you must delete these first, as the CloudFormation stack cannot be fully destroyed if any externally-managed resources still exist.
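A typical teardown therefore looks like the following sketch (the service and node pool names are placeholders; the `--node-pool-name` flag is assumed to work the same way as for the other `node-pools` subcommands shown in this guide):

```sh
# Delete Kubernetes Services of type `LoadBalancer` first, so their ELBs are removed
$ kubectl --kubeconfig=kubeconfig delete svc <your-loadbalancer-service>

# Destroy every node pool before the main cluster
$ kube-aws node-pools destroy --node-pool-name <your-pool-name>

# Finally, destroy the main cluster
$ kube-aws destroy
```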
12 changes: 6 additions & 6 deletions Documentation/kubernetes-on-aws-launch.md
@@ -7,7 +7,7 @@ This is the [third step of running Kubernetes on AWS][aws-step-1]. We're ready t
Now for the exciting part, creating your cluster:

```sh
$ kube-aws up
$ kube-aws up --s3-uri s3://<your-bucket-name>/<prefix>
```

**NOTE**: It can take some time after `kube-aws up` completes before the cluster is available. When the cluster is first being launched, it must download all container images for the cluster components (Kubernetes, dns, heapster, etc). Depending on the speed of your connection, it can take a few minutes before the Kubernetes api-server is available.
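One way to check whether the API server has come up, assuming you run it from the asset directory containing the rendered `kubeconfig` file:

```sh
$ kubectl --kubeconfig=kubeconfig get nodes
```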
@@ -18,7 +18,7 @@ If you configured Route 53 settings in your configuration above via `createRecor

Otherwise, navigate to the DNS registrar hosting the zone for the provided external DNS name. Ensure a single A record exists, routing the value of `externalDNSName` defined in `cluster.yaml` to the externally-accessible IP of the master node instance.

You can invoke `kube-aws status` to get the cluster API IP address after cluster creation, if necessary. This command can take a while.
You can invoke `kube-aws status` to get the cluster API endpoint after cluster creation, if necessary. This command can take a while.

## Access the cluster

@@ -59,11 +59,11 @@ If you want to share, audit or back up your stack, use the export flag:
$ kube-aws up --export
```

## Destroy the cluster

When you are done with your cluster, simply run `kube-aws destroy` and all cluster components will be destroyed.
If you created any Kubernetes Services of type `LoadBalancer`, you must delete these first, as the CloudFormation cannot be fully destroyed if any externally-managed resources still exist.
Once you have successfully launched your cluster, you are ready to [update your cluster][aws-step-4].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
20 changes: 20 additions & 0 deletions Documentation/kubernetes-on-aws-limitations.md
@@ -0,0 +1,20 @@
# Known Limitations

## hostPort doesn't work

This isn't really a kube-aws issue but rather a Kubernetes and/or CNI issue.
Either way, `hostPort` doesn't work when `hostNetwork: false`.

If you want to deploy `nginx-ingress-controller`, which requires `hostPort`, just set `hostNetwork: true`:

```yaml
spec:
  hostNetwork: true
  containers:
    - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
      name: nginx-ingress-lb
```

Relevant kube-aws issue: [does hostPort not work on kube-aws/CoreOS?](https://github.com/coreos/kube-aws/issues/91)

See [the upstream issue](https://github.com/kubernetes/kubernetes/issues/23920#issuecomment-254918942) for more information.
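For reference, here is a minimal, self-contained Pod manifest sketch demonstrating the `hostNetwork: true` workaround; the pod name and the plain nginx image are illustrative and not specific to kube-aws:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-network-example
spec:
  # With hostNetwork enabled, the container binds directly to the node's network
  # namespace, so port 80 is reachable on the node without relying on hostPort
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx:1.11
      ports:
        - containerPort: 80
```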
156 changes: 156 additions & 0 deletions Documentation/kubernetes-on-aws-node-pool.md
@@ -0,0 +1,156 @@
# Node Pool

Node Pool allows you to bring up additional pools of worker nodes each with a separate configuration including:

* Instance Type
* Storage Type/Size/IOPS
* Instance Profile
* Additional, User-Provided Security Group(s)
* Spot Price
* AWS service to manage your EC2 instances: [Auto Scaling](http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html) or [Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html)
* [Node labels](http://kubernetes.io/docs/user-guide/node-selection/)
* [Taints](https://github.com/kubernetes/kubernetes/issues/17190)

## Deploying a Multi-AZ cluster with cluster-autoscaler support with Node Pools

Edit the `cluster.yaml` file to decrease `workerCount`, which is meant to be the number of worker nodes in the "main" cluster, down to zero:

```yaml
workerCount: 0
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"
```
Update the main cluster to pick up the changes made in `cluster.yaml`:

```sh
$ kube-aws update \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```
Create two node pools, each in a different availability zone with its own subnet:
```sh
$ kube-aws node-pools init --node-pool-name first-pool-in-1a \
  --availability-zone us-west-1a \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}

$ kube-aws node-pools init --node-pool-name second-pool-in-1b \
  --availability-zone us-west-1b \
  --key-name ${KUBE_AWS_KEY_NAME} \
  --kms-key-arn ${KUBE_AWS_KMS_KEY_ARN}
```
Edit the `cluster.yaml` for the first zone:
```
$ $EDITOR node-pools/first-pool-in-1a/cluster.yaml
```
```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1a
    instanceCIDR: "10.0.1.0/24"
```

Edit the `cluster.yaml` for the second zone:

```
$ $EDITOR node-pools/second-pool-in-1b/cluster.yaml
```

```yaml
workerCount: 1
subnets:
  - availabilityZone: us-west-1b
    instanceCIDR: "10.0.2.0/24"
```
Launch the node pools:
```sh
$ kube-aws node-pools up --node-pool-name first-pool-in-1a \
  --s3-uri s3://<my-bucket>/<optional-prefix>

$ kube-aws node-pools up --node-pool-name second-pool-in-1b \
  --s3-uri s3://<my-bucket>/<optional-prefix>
```
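Once both pools are up, the new workers should register with the API server. One way to verify, assuming the `kubeconfig` rendered for the main cluster, is to list the nodes with their labels and check the `failure-domain.beta.kubernetes.io/zone` label for each availability zone:

```sh
$ kubectl --kubeconfig=kubeconfig get nodes --show-labels
```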

Deploying cluster-autoscaler is currently out of the scope of this documentation.
Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions.

## Customizing min/max size of the auto scaling group

If you've chosen to power the worker nodes in a node pool with an auto scaling group, you can customize `MinSize`, `MaxSize`, and `MinInstancesInService` in `cluster.yaml`:

Please read [the AWS documentation](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#aws-properties-as-group-prop) for more information on `MinSize`, `MaxSize`, and `MinInstancesInService` for ASGs.

```yaml
worker:
  # Auto Scaling Group definition for workers. If only `workerCount` is specified,
  # min and max will be set to that value and `rollingUpdateMinInstancesInService`
  # will be one less.
  autoScalingGroup:
    minSize: 1
    maxSize: 3
    rollingUpdateMinInstancesInService: 2
```

See [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) for further information.

## Deploying a node pool powered by Spot Fleet

Utilizing Spot Fleet gives you a chance to dramatically reduce the cost of the EC2 instances powering your Kubernetes worker nodes while maintaining reasonable availability.
AWS advertises cost reductions of up to 90%, but the actual savings vary with instance types and other users' bids.

Spot Fleet support may change in backward-incompatible ways as it is still an experimental feature, so please use it at your own risk.
However, we'd greatly appreciate your feedback because it does accelerate improvements in this area!

This feature assumes you already have an IAM role with an ARN like "arn:aws:iam::youraccountid:role/aws-ec2-spot-fleet-role" in your AWS account.
This implies that you've visited the "Spot Requests" page in the EC2 Dashboard in the AWS console at least once.
See [the AWS documentation describing pre-requisites for Spot Fleet](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html#spot-fleet-prerequisites) for details.
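A quick way to check whether that role already exists in your account, assuming the default role name shown above:

```sh
$ aws iam get-role --role-name aws-ec2-spot-fleet-role
```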

To add a node pool powered by Spot Fleet, edit node pool's `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 3
```
To customize your launch specifications to diversify your pool among instance types other than the defaults, edit `cluster.yaml`:

```yaml
worker:
  spotFleet:
    targetCapacity: 5
    launchSpecifications:
      - weightedCapacity: 1
        instanceType: m3.medium
      - weightedCapacity: 2
        instanceType: m3.large
      - weightedCapacity: 2
        instanceType: m4.large
```

This configuration would normally result in Spot Fleet bringing up 3 instances to meet your target capacity of 5:

* 1x m3.medium = 1 capacity
* 1x m3.large = 2 capacity
* 1x m4.large = 2 capacity

This is achieved by the `diversified` strategy of Spot Fleet.
Please read [the AWS documentation describing Spot Fleet Allocation Strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.

Please also see [the detailed comments in `cluster.yaml`](https://github.com/coreos/kube-aws/blob/master/nodepool/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/coreos/kube-aws/issues/112) of this feature for further information.

When you are done with your cluster, [destroy your cluster][aws-step-6].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md
26 changes: 26 additions & 0 deletions Documentation/kubernetes-on-aws-prerequisites.md
@@ -0,0 +1,26 @@
# Pre-requisites

If you're deploying a cluster with kube-aws:

* [EC2 instances whose types are larger than or equal to `m3.medium` should be chosen for the cluster to work reliably](https://github.com/coreos/kube-aws/issues/138)
* [At least 3 etcd, 2 controller, and 2 worker nodes are required to achieve high availability](https://github.com/coreos/kube-aws/issues/138#issuecomment-266432162) (see the sketch below)
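As a sketch, the corresponding counts in the main cluster's `cluster.yaml` would look like the following; the exact field names are assumed here, so verify them against the comments in your generated `cluster.yaml`:

```yaml
# Counts chosen for high availability; assumed field names, verify against your cluster.yaml
etcdCount: 3
controllerCount: 2
workerCount: 2
```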

## Deploying to an existing VPC

If you're deploying a cluster to an existing VPC:

* An Internet Gateway needs to be added to the VPC before the cluster can be created
  * Otherwise, [all the nodes will fail to launch because they can't pull docker images or ACIs required to run essential processes like fleet, hyperkube, etcd, awscli, cfn-signal, cfn-init.](https://github.com/coreos/kube-aws/issues/120)
* Existing route tables to be reused by kube-aws must be tagged with the key `KubernetesCluster` and your cluster's name as the value (see the commands after this list)
  * Otherwise, [Kubernetes will fail to create ELBs corresponding to Kubernetes services with `type=LoadBalancer`](https://github.com/coreos/kube-aws/issues/135)
* ["DNS Hostnames" must be turned on before the cluster can be created](https://github.com/coreos/kube-aws/issues/119) (see the commands after this list)
  * Otherwise, the etcd nodes are unable to communicate with each other and the cluster doesn't work at all
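The last two items can be satisfied with the AWS CLI; the following is a sketch where the route table ID, VPC ID, and cluster name are placeholders:

```sh
# Tag an existing route table so Kubernetes can manage ELBs for `type=LoadBalancer` services
$ aws ec2 create-tags \
    --resources rtb-xxxxxxxx \
    --tags Key=KubernetesCluster,Value=<your-cluster-name>

# Turn on "DNS Hostnames" for the VPC
$ aws ec2 modify-vpc-attribute \
    --vpc-id vpc-xxxxxxxx \
    --enable-dns-hostnames '{"Value": true}'
```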

Once you understand the pre-requisites, you are [ready to launch your first Kubernetes cluster][aws-step-1].

[aws-step-1]: kubernetes-on-aws.md
[aws-step-2]: kubernetes-on-aws-render.md
[aws-step-3]: kubernetes-on-aws-launch.md
[aws-step-4]: kube-aws-cluster-updates.md
[aws-step-5]: kubernetes-on-aws-node-pool.md
[aws-step-6]: kubernetes-on-aws-destroy.md