From e94afba91eb29c37b297c667f96d1104ee474d9a Mon Sep 17 00:00:00 2001
From: Dominic Yin
Date: Mon, 25 Nov 2019 09:54:00 +0800
Subject: [PATCH] fix broken links

---
 contrib/dex/README.md                        | 2 +-
 docs/advanced-topics/use-an-existing-vpc.md  | 2 +-
 docs/getting-started/step-2-render.md        | 4 ++--
 docs/getting-started/step-5-add-node-pool.md | 6 +++---
 e2e/README.md                                | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/contrib/dex/README.md b/contrib/dex/README.md
index 4c49d8e9c..a95f8927a 100644
--- a/contrib/dex/README.md
+++ b/contrib/dex/README.md
@@ -42,7 +42,7 @@ Examples are provided in `contrib/dex/elb` directory.
 
 2. Ingress
 
-An example that works with [nginx-controller](https://github.com/nginxinc/kubernetes-ingress/tree/master/nginx-controller) + [kube-lego](https://github.com/jetstack/kube-lego) is provided in `contrib/dex/ingress`.
+An example that works with [nginx-ingress](https://github.com/nginxinc/kubernetes-ingress/tree/master/cmd/nginx-ingress) + [kube-lego](https://github.com/jetstack/kube-lego) is provided in `contrib/dex/ingress`.
 
 ## Configure `kubectl` for token authentication
 
diff --git a/docs/advanced-topics/use-an-existing-vpc.md b/docs/advanced-topics/use-an-existing-vpc.md
index ded80d731..9712c4f42 100644
--- a/docs/advanced-topics/use-an-existing-vpc.md
+++ b/docs/advanced-topics/use-an-existing-vpc.md
@@ -10,7 +10,7 @@ Please note that you don't need to care about modifications if you've instructed
 * Adding one or more subnet\(s\) to an existing VPC specified by the `vpcId`
 * Associating one or more subnet\(s\) to an existing route table specified by the `routeTableId`
 
-See [`cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) for more details.
+See [`cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) for more details.
 
 All the other configurations for existing AWS resources must be done properly by users before kube-aws is run.
 
diff --git a/docs/getting-started/step-2-render.md b/docs/getting-started/step-2-render.md
index e8d805075..21b5f00bc 100644
--- a/docs/getting-started/step-2-render.md
+++ b/docs/getting-started/step-2-render.md
@@ -327,7 +327,7 @@ Kube-aws supports "spreading" a cluster across any number of Availability Zones
 
 __A word of caution about EBS and Persistent Volumes__: Any pods deployed to a Multi-AZ cluster must mount EBS volumes via [Persistent Volume Claims](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims). Specifying the ID of the EBS volume directly in the pod spec will not work consistently if nodes are spread across multiple zones.
 
-Read more about Kubernetes Multi-AZ cluster support [here](http://kubernetes.io/docs/admin/multiple-zones/).
+Read more about Kubernetes Multi-AZ cluster support [here](https://kubernetes.io/docs/setup/best-practices/multiple-zones/).
 
 #### A common pitfall when deploying multi-AZ clusters in combination with cluster-autoscaler
 
@@ -340,7 +340,7 @@
 A common pitfall in deploying cluster-autoscaler to a multi-AZ cluster is that you have to instruct an Auto Scaling Group not to spread over multiple availability zones, or cluster-autoscaler becomes unstable while scaling out the nodes, i.e. it takes unnecessarily long to finally bring up a node in the zone with insufficient capacity.
 > The autoscaling group should span 1 availability zone for the cluster autoscaler to work. If you want to distribute workloads evenly across zones, set up multiple ASGs, with a cluster autoscaler for each ASG. At the time of writing this, cluster autoscaler is unaware of availability zones and although autoscaling groups can contain instances in multiple availability zones when configured so, the cluster autoscaler can't reliably add nodes to desired zones. That's because AWS AutoScaling determines which zone to add nodes to, which is out of the control of the cluster autoscaler. For more information, see https://github.com/kubernetes/contrib/pull/1552#discussion_r75533090.
-> https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler/cloudprovider/aws#deployment-specification
+> https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification
 
 Please read the following guides carefully and select the appropriate deployment according to your requirement regarding auto-scaling.
 
diff --git a/docs/getting-started/step-5-add-node-pool.md b/docs/getting-started/step-5-add-node-pool.md
index f693f69e9..70195e5e3 100644
--- a/docs/getting-started/step-5-add-node-pool.md
+++ b/docs/getting-started/step-5-add-node-pool.md
@@ -63,7 +63,7 @@ Beware that you have to associate only 1 AZ to a node pool or cluster-autoscaler
 that what cluster-autoscaler does is to increase/decrease the desired capacity, hence it has no way to selectively add node(s) in a desired AZ.
 
 Also note that deployment of cluster-autoscaler is currently out of the scope of this documentation.
-Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/contrib/blob/master/cluster-autoscaler/cloudprovider/aws/README.md) for instructions on it.
+Please read [cluster-autoscaler's documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#deployment-specification) for instructions on it.
 
 ## Customizing min/max size of the auto scaling group
 
@@ -81,7 +81,7 @@ worker:
   nodePools:
     - name: nodepool1
       autoScalingGroup:
         minSize: 1
         maxSize: 3
       rollingUpdateMinInstancesInService: 2
 ```
 
-See [the detailed comments in `cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) for further information.
+See [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) for further information.
 
 ## Deploying a node pool powered by Spot Fleet
 
@@ -141,7 +141,7 @@ This configuration would normally result in Spot Fleet to bring up 3 instances t
 This is achieved by the `diversified` strategy of Spot Fleet. Please read [the AWS documentation describing Spot Fleet Allocation Strategy](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html#spot-fleet-allocation-strategy) for more details.
 
-Please also see [the detailed comments in `cluster.yaml`](https://github.com/kubernetes-incubator/kube-aws/blob/master/core/controlplane/config/templates/cluster.yaml) and [the GitHub issue summarizing the initial implementation](https://github.com/kubernetes-incubator/kube-aws/issues/112) of this feature for further information.
+Please also see [the detailed comments in `cluster.yaml.tmpl`](https://github.com/kubernetes-incubator/kube-aws/blob/master/builtin/files/cluster.yaml.tmpl) and [the GitHub issue summarizing the initial implementation](https://github.com/kubernetes-incubator/kube-aws/issues/112) of this feature for further information.
 
 You can optionally [configure various Kubernetes add-ons][getting-started-step-6] according to your requirements.
 When you are done with your cluster, [destroy your cluster][getting-started-step-7]
diff --git a/e2e/README.md b/e2e/README.md
index 30166b5bc..658a2f0fe 100644
--- a/e2e/README.md
+++ b/e2e/README.md
@@ -4,7 +4,7 @@
 This directory contains a set of tools to run end-to-end testing for kube-aws.
 It is composed of:
 
 * Cluster creation using `kube-aws`
-* [Kubernetes Conformance Tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#conformance-tests)
+* [Kubernetes Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md#conformance-tests)
 
 To run e2e tests, you should have all the required env vars set. For convenience, creating a `.envrc` file used by `direnv`, like the sketch below, is a good idea.
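
As an illustration only, here is a minimal `.envrc` sketch. The variable names and values are assumptions for the sake of example, not the canonical list; consult the e2e scripts in this directory for the exact set your kube-aws version expects.

```bash
# Hypothetical .envrc consumed by direnv -- every name and value below is
# an illustrative placeholder, not the authoritative list of variables.
export KUBE_AWS_CLUSTER_NAME=kube-aws-e2e      # cluster name used for the test run (assumed)
export KUBE_AWS_KEY_NAME=my-ec2-key-pair       # name of an existing EC2 key pair (assumed)
export KUBE_AWS_REGION=us-west-1               # AWS region to create the cluster in (assumed)
export KUBE_AWS_AVAILABILITY_ZONE=us-west-1a   # availability zone for the subnets (assumed)
export KUBE_AWS_KMS_KEY_ARN="arn:aws:kms:us-west-1:123456789012:key/placeholder"  # KMS key for asset encryption (assumed)
export KUBE_AWS_DOMAIN=example.com             # domain with a Route 53 hosted zone (assumed)
```

After running `direnv allow` once, direnv exports these variables automatically whenever you enter the directory, so repeated e2e runs pick up a consistent environment.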