This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

fix broken links #1781

Merged (1 commit, Nov 25, 2019)
2 changes: 1 addition & 1 deletion contrib/dex/README.md
@@ -42,7 +42,7 @@ Examples are provided in `contrib/dex/elb` directory.

2. Ingress

- An example that works with [nginx-controller](https://github.com/nginxinc/kubernetes-ingress/tree/master/nginx-controller) + [kube-lego](https://github.com/jetstack/kube-lego) is provided in `contrib/dex/ingress`.
+ An example that works with [nginx-ingress](https://github.com/nginxinc/kubernetes-ingress/tree/master/cmd/nginx-ingress) + [kube-lego](https://github.com/jetstack/kube-lego) is provided in `contrib/dex/ingress`.


## Configure `kubectl` for token authentication
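The section body is elided here, but the wiring it describes can be sketched with standard `kubectl config` subcommands. This is a hedged illustration only: the cluster name, user name, context name, and the `DEX_ID_TOKEN` variable are placeholders, not names from the kube-aws docs.

```shell
# Sketch: register a dex-issued bearer token as kubectl credentials.
# All names and the token variable below are illustrative placeholders.
kubectl config set-credentials dex-user --token="${DEX_ID_TOKEN}"
kubectl config set-context my-cluster-dex --cluster=my-cluster --user=dex-user
kubectl config use-context my-cluster-dex
```

After switching contexts, `kubectl` sends the token as a bearer credential on every API request.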
2 changes: 1 addition & 1 deletion docs/advanced-topics/use-an-existing-vpc.md
@@ -10,7 +10,7 @@ Please note that you don't need to care about modifications if you've instructed
* Adding one or more subnet\(s\) to an existing VPC specified by the `vpcId`
* Associating one or more subnet\(s\) to an existing route table specified by the `routeTableId`

- See [`cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) for more details.
+ See [`cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) for more details.

All the other configurations for existing AWS resources must be done properly by users before kube-aws is run.
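The two bullets above combine in `cluster.yaml` roughly as follows. This is a sketch under assumptions: the key names mirror the `vpcId`, `routeTableId`, and subnet settings this page refers to, and all IDs, zones, and CIDRs are placeholders.

```yaml
# Sketch: reuse an existing VPC and route table (IDs are placeholders).
vpcId: vpc-0a1b2c3d
routeTableId: rtb-0a1b2c3d
# Subnets kube-aws will create inside the existing VPC and associate
# with the route table above; zone and CIDR are illustrative.
subnets:
- availabilityZone: us-west-1a
  instanceCIDR: "10.0.1.0/24"
```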

4 changes: 2 additions & 2 deletions docs/getting-started/step-2-render.md
@@ -327,7 +327,7 @@ Kube-aws supports "spreading" a cluster across any number of Availability Zones

__A word of caution about EBS and Persistent Volumes__: Any pods deployed to a Multi-AZ cluster must mount EBS volumes via [Persistent Volume Claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). Specifying the ID of the EBS volume directly in the pod spec will not work consistently if nodes are spread across multiple zones.
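The claim-based pattern that caution recommends looks roughly like this; names, image, and storage size are placeholders, and the claim must be bound to an EBS-backed storage class for the AZ scheduling to work as described.

```yaml
# Sketch: mount EBS via a PersistentVolumeClaim instead of hardcoding
# a volume ID in the pod spec (names and size are placeholders).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
```

Because the scheduler sees only the claim, it can place the pod in whichever zone the bound volume lives in, which is why direct volume IDs break down across zones.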

- Read more about Kubernetes Multi-AZ cluster support [here](http://kubernetes.io/docs/admin/multiple-zones/).
+ Read more about Kubernetes Multi-AZ cluster support [here](https://kubernetes.io/docs/setup/best-practices/multiple-zones/).

#### A common pitfall when deploying multi-AZ clusters in combination with cluster-autoscaler

@@ -340,7 +340,7 @@ Read more about Kubernetes Multi-AZ cluster support [here](http://kubernetes.io/
A common pitfall in deploying cluster-autoscaler to a multi-AZ cluster is that you have to instruct an Auto Scaling Group not to spread over multiple availability zones; otherwise cluster-autoscaler becomes unstable while scaling out nodes, i.e. it takes unnecessarily long to finally bring up a node in the zone with insufficient capacity.

> The autoscaling group should span 1 availability zone for the cluster autoscaler to work. If you want to distribute workloads evenly across zones, set up multiple ASGs, with a cluster autoscaler for each ASG. At the time of writing this, cluster autoscaler is unaware of availability zones and although autoscaling groups can contain instances in multiple availability zones when configured so, the cluster autoscaler can't reliably add nodes to desired zones. That's because AWS AutoScaling determines which zone to add nodes which is out of the control of the cluster autoscaler. For more information, see https://github.com/kubernetes/contrib/pull/1552#discussion_r75533090.
- > https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler/cloudprovider/aws#deployment-specification
+ > https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification

Please read the following guides carefully and select the appropriate deployment according to your requirement regarding auto-scaling.
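The one-ASG-per-AZ shape the quote recommends can be sketched in `cluster.yaml` as one node pool per zone, each running its own cluster-autoscaler. The key names below are assumed from kube-aws's node-pool configuration, and the pool and subnet names are placeholders.

```yaml
# Sketch: one node pool (hence one ASG) per availability zone, so each
# cluster-autoscaler controls exactly one zone. Names are placeholders.
worker:
  nodePools:
  - name: pool-us-west-1a
    subnets:
    - name: subnet-a   # a subnet pinned to us-west-1a
  - name: pool-us-west-1b
    subnets:
    - name: subnet-b   # a subnet pinned to us-west-1b
```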

6 changes: 3 additions & 3 deletions docs/getting-started/step-5-add-node-pool.md
@@ -63,7 +63,7 @@ Beware that you have to associate only 1 AZ to a node pool or cluster-autoscaler
that what cluster-autoscaler does is to increase/decrease the desired capacity, hence it has no way to selectively add node(s) in a desired AZ.

Also note that deployment of cluster-autoscaler is currently out of scope of this documentation.
- Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions on it.
+ Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification) for instructions on it.

## Customizing min/max size of the auto scaling group

@@ -81,7 +81,7 @@ worker:
```yaml
rollingUpdateMinInstancesInService: 2
```

- See [the detailed comments in `cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) for further information.
+ See [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) for further information.

## Deploying a node pool powered by Spot Fleet

@@ -141,7 +141,7 @@ This configuration would normally result in Spot Fleet to bring up 3 instances t
This is achieved by the `diversified` strategy of Spot Fleet.
Please read [the AWS documentation describing Spot Fleet Allocation Strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.
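A diversified Spot Fleet node pool of the kind described above can be sketched as follows. The key names are assumed from kube-aws's Spot Fleet support, and the pool name, instance types, and weights are placeholders, not the exact configuration from the elided example.

```yaml
# Sketch: a node pool backed by a diversified Spot Fleet.
# targetCapacity is measured in weighted-capacity units, so the fleet
# spreads requests across the listed specs. Values are placeholders.
worker:
  nodePools:
  - name: spot-pool
    spotFleet:
      targetCapacity: 3
      launchSpecifications:
      - weightedCapacity: 1
        instanceType: m4.large
      - weightedCapacity: 2
        instanceType: m4.xlarge
```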

- Please also see [the detailed comments in `cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/kubernetes-incubator/kube-aws/issues/112) of this feature for further information.
+ Please also see [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) and [the GitHub issue summarizing the initial implementation](https://github.com/kubernetes-incubator/kube-aws/issues/112) of this feature for further information.

You can optionally [configure various Kubernetes add-ons][getting-started-step-6] according to your requirements.
When you are done with your cluster, [destroy your cluster][getting-started-step-7].
2 changes: 1 addition & 1 deletion e2e/README.md
@@ -4,7 +4,7 @@ This directory contains a set of tools to run end-to-end testing for kube-aws.
It is composed of:

* Cluster creation using `kube-aws`
- * [Kubernetes Conformance Tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#conformance-tests)
+ * [Kubernetes Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md#conformance-tests)

To run e2e tests, you should have set all the required env vars.
For convenience, it is good to create an `.envrc` file consumed by `direnv`, as follows.
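A hypothetical `.envrc` might look like the following. The variable names and values here are illustrative only; check the scripts under `e2e/` for the exact set of variables they require.

```shell
# Illustrative .envrc loaded by direnv on entering the e2e/ directory.
# Variable names and values are placeholders, not the canonical list.
export KUBE_AWS_CLUSTER_NAME=kube-aws-e2e
export KUBE_AWS_KEY_NAME=my-ec2-keypair
export KUBE_AWS_REGION=us-west-1
export KUBE_AWS_AVAILABILITY_ZONE=us-west-1a
```

With `direnv allow` run once, the variables are exported automatically whenever you `cd` into the directory and unset when you leave.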