Merge pull request #3803 from gjtempleton/Control-Plane
Follow WG Naming Recommendations on Master -> Control Plane
k8s-ci-robot authored Jan 22, 2021
2 parents 3406b38 + 4fbe142 commit 7a786bc
Showing 14 changed files with 32 additions and 25 deletions.
13 changes: 7 additions & 6 deletions cluster-autoscaler/FAQ.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Frequently Asked Questions

# Older versions
@@ -633,8 +634,8 @@ The following startup parameters are supported for cluster autoscaler:
| --- | --- | --- |
| `cluster-name` | Autoscaled cluster name, if available | ""
| `address` | The address to expose prometheus metrics | :8085
-| `kubernetes` | Kubernetes master location. Leave blank for default | ""
-| `kubeconfig` | Path to kubeconfig file with authorization and master location information | ""
+| `kubernetes` | Kubernetes API Server location. Leave blank for default | ""
+| `kubeconfig` | Path to kubeconfig file with authorization and API Server location information | ""
| `cloud-config` | The path to the cloud provider configuration file. Empty string for no configuration file | ""
| `namespace` | Namespace in which cluster-autoscaler run | "kube-system"
| `scale-down-enabled` | Should CA scale down the cluster | true
@@ -674,7 +675,7 @@ The following startup parameters are supported for cluster autoscaler:
| `regional` | Cluster is regional | false
| `leader-elect` | Start a leader election client and gain leadership before executing the main loop.<br>Enable this when running replicated components for high availability | true
| `leader-elect-lease-duration` | The duration that non-leader candidates will wait after observing a leadership<br>renewal until attempting to acquire leadership of a led but unrenewed leader slot.<br>This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate.<br>This is only applicable if leader election is enabled | 15 seconds
-| `leader-elect-renew-deadline` | The interval between attempts by the acting master to renew a leadership slot before it stops leading.<br>This must be less than or equal to the lease duration.<br>This is only applicable if leader election is enabled | 10 seconds
+| `leader-elect-renew-deadline` | The interval between attempts by the active cluster-autoscaler to renew a leadership slot before it stops leading.<br>This must be less than or equal to the lease duration.<br>This is only applicable if leader election is enabled | 10 seconds
| `leader-elect-retry-period` | The duration the clients should wait between attempting acquisition and renewal of a leadership.<br>This is only applicable if leader election is enabled | 2 seconds
| `leader-elect-resource-lock` | The type of resource object that is used for locking during leader election.<br>Supported options are `endpoints` (default) and `configmaps` | "endpoints"
| `aws-use-static-instance-list` | Should CA fetch instance types in runtime or use a static list. AWS only | false
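
To make the flag syntax concrete, here is a minimal sketch of how a few of the parameters above might be passed as container arguments in a Cluster Autoscaler Deployment; the image tag, kubeconfig path, and flag values are illustrative assumptions, not recommendations from this commit:

```yaml
# Sketch only: image tag, kubeconfig path, and flag values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system            # matches the default --namespace value
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - name: cluster-autoscaler
          image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.20.0   # placeholder tag
          command:
            - ./cluster-autoscaler
            - --scale-down-enabled=true
            - --leader-elect=true
            - --leader-elect-lease-duration=15s
            - --leader-elect-renew-deadline=10s
            - --leader-elect-retry-period=2s
            # omit --kubeconfig to fall back to the in-cluster configuration
            - --kubeconfig=/etc/kubernetes/cluster-autoscaler.kubeconfig
```
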
@@ -775,7 +776,7 @@ If both the cluster and CA appear healthy:

* If you expect some nodes to be added to make space for pending pods, but they are not added for a long time, check [I have a couple of pending pods, but there was no scale-up?](#i-have-a-couple-of-pending-pods-but-there-was-no-scale-up) section.

-* If you have access to the master machine, check Cluster Autoscaler logs in `/var/log/cluster-autoscaler.log`. Cluster Autoscaler logs a lot of useful information, including why it considers a pod unremovable or what was its scale-up plan.
+* If you have access to the control plane (previously referred to as master) machine, check Cluster Autoscaler logs in `/var/log/cluster-autoscaler.log`. Cluster Autoscaler logs a lot of useful information, including why it considers a pod unremovable or what was its scale-up plan.

* Check events added by CA to the pod object.

@@ -787,7 +788,7 @@ If both the cluster and CA appear healthy:

There are three options:

-* Logs on the master node, in `/var/log/cluster-autoscaler.log`.
+* Logs on the control plane (previously referred to as master) nodes, in `/var/log/cluster-autoscaler.log`.
* Cluster Autoscaler 0.5 and later publishes kube-system/cluster-autoscaler-status config map.
To see it, run `kubectl get configmap cluster-autoscaler-status -n kube-system
-o yaml`.
@@ -862,7 +863,7 @@ Depending on how long scale-ups have been failing, it may wait up to 30 minutes
```
This is the minimum number of nodes required for all e2e tests to pass. The tests should also pass if you set higher maximum nodes limit.
3. Run `go run hack/e2e.go -- --verbose-commands --up` to bring up your cluster.
-4. SSH to the master node and edit `/etc/kubernetes/manifests/cluster-autoscaler.manifest` (you will need sudo for this).
+4. SSH to the control plane (previously referred to as master) node and edit `/etc/kubernetes/manifests/cluster-autoscaler.manifest` (you will need sudo for this).
* If you want to test your custom changes set `image` to point at your own CA image.
* Make sure `--scale-down-enabled` parameter in `command` is set to `true`.
5. Run CA tests with:
7 changes: 4 additions & 3 deletions cluster-autoscaler/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler

# Introduction
@@ -24,7 +25,7 @@ You should also take a look at the notes and "gotchas" for your specific cloud p

# Releases

-We recommend using Cluster Autoscaler with the Kubernetes master version for which it was meant. The below combinations have been tested on GCP. We don't do cross version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters, however, there is always a chance that it won't work as expected.
+We recommend using Cluster Autoscaler with the Kubernetes control plane (previously referred to as master) version for which it was meant. The below combinations have been tested on GCP. We don't do cross version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters, however, there is always a chance that it won't work as expected.

Starting from Kubernetes 1.12, versioning scheme was changed to match Kubernetes minor releases exactly.

@@ -52,7 +53,7 @@ For CA 1.1.2 and later, please check [release
notes.](https://github.com/kubernetes/autoscaler/releases)

CA version 1.1.1:
-* Fixes around metrics in the multi-master configuration.
+* Fixes around metrics in the multiple kube apiserver configuration.
* Fixes for unready nodes issues when quota is overrun.

CA version 1.1.0:
@@ -131,7 +132,7 @@ CA Version 0.3:

# Deployment

-Cluster Autoscaler is designed to run on Kubernetes master node. This is the
+Cluster Autoscaler is designed to run on Kubernetes control plane (previously referred to as master) node. This is the
default deployment strategy on GCP.
It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs
to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into kube-system
9 changes: 5 additions & 4 deletions cluster-autoscaler/cloudprovider/aws/README.md
@@ -208,16 +208,17 @@ kubectl apply -f examples/cluster-autoscaler-one-asg.yaml
kubectl apply -f examples/cluster-autoscaler-multi-asg.yaml
```
-## Master Node Setup
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+## Control Plane (previously referred to as master) Node Setup
**NOTE**: This setup is not compatible with Amazon EKS.
-To run a CA pod in master node - CA deployment should tolerate the master
-`taint` and `nodeSelector` should be used to schedule the pods in master node.
+To run a CA pod on a control plane node the CA deployment should tolerate the `master`
+taint and `nodeSelector` should be used to schedule the pods on a control plane node.
Please replace `{{ node_asg_min }}`, `{{ node_asg_max }}` and `{{ name }}` with
your ASG setting in the yaml file.
```
-kubectl apply -f examples/cluster-autoscaler-run-on-master.yaml
+kubectl apply -f examples/cluster-autoscaler-run-on-control-plane.yaml
```
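
The example manifest itself is not part of this diff; as a rough sketch, the scheduling-related fields it describes (tolerating the `master` taint and selecting a control plane node) typically look like the following, assuming the conventional `node-role.kubernetes.io/master` label and taint:

```yaml
# Sketch of the pod-spec fragment only; see the referenced example for the full Deployment.
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```

On clusters created with newer tooling the label and taint may be `node-role.kubernetes.io/control-plane` instead, in line with the renaming this commit follows.
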
## Using Mixed Instances Policies and Spot Instances
7 changes: 4 additions & 3 deletions cluster-autoscaler/cloudprovider/azure/README.md
@@ -133,7 +133,8 @@ Save the updated deployment manifest, then deploy cluster-autoscaler by running:
kubectl create -f cluster-autoscaler-vmss.yaml
```

-To run a cluster autoscaler pod on a master node, the deployment should tolerate the `master` taint, and `nodeSelector` should be used to schedule pods. Use [cluster-autoscaler-vmss-master.yaml](examples/cluster-autoscaler-vmss-master.yaml) in this case.
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+To run a cluster autoscaler pod on a control plane (previously referred to as master) node, the deployment should tolerate the `master` taint, and `nodeSelector` should be used to schedule pods. Use [cluster-autoscaler-vmss-control-plane.yaml](examples/cluster-autoscaler-vmss-control-plane.yaml) in this case.

To run a cluster autoscaler pod with Azure managed service identity (MSI), use [cluster-autoscaler-vmss-msi.yaml](examples/cluster-autoscaler-vmss-msi.yaml) instead.

@@ -172,7 +173,7 @@ Prerequisites:
- Get Azure credentials from the [**Permissions**](#permissions) step above.
- Get the name of the initial Azure deployment resource for the cluster. You can find this in the [Azure Portal](https://portal.azure.com) or with the `az deployment list` command. If there are multiple deployments, get the name of the first one.

-Make a copy of [cluster-autoscaler-standard-master.yaml](examples/cluster-autoscaler-standard-master.yaml). Fill in the placeholder values for the `cluster-autoscaler-azure` secret data by base64-encoding each of your Azure credential fields.
+Make a copy of [cluster-autoscaler-standard-control-plane.yaml](examples/cluster-autoscaler-standard-control-plane.yaml). Fill in the placeholder values for the `cluster-autoscaler-azure` secret data by base64-encoding each of your Azure credential fields.

- ClientID: `<base64-encoded-client-id>`
- ClientSecret: `<base64-encoded-client-secret>`
@@ -208,7 +209,7 @@ kubectl -n kube-system create secret generic cluster-autoscaler-azure-deploy-par
Then deploy cluster-autoscaler by running:

```sh
-kubectl create -f cluster-autoscaler-standard-master.yaml
+kubectl create -f cluster-autoscaler-standard-control-plane.yaml
```

To run a cluster autoscaler pod with Azure managed service identity (MSI), use [cluster-autoscaler-standard-msi.yaml](examples/cluster-autoscaler-standard-msi.yaml) instead.
2 changes: 1 addition & 1 deletion cluster-autoscaler/cloudprovider/exoscale/README.md
@@ -47,7 +47,7 @@ To deploy the CA on your Kubernetes cluster, you can use the manifest provided
as example:

```
-kubectl apply -f ./examples/cluster-autoscaler-run-on-master.yaml
+kubectl apply -f ./examples/cluster-autoscaler-run-on-control-plane.yaml
```


3 changes: 2 additions & 1 deletion cluster-autoscaler/cloudprovider/huaweicloud/README.md
@@ -165,7 +165,8 @@ openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin
chkconfig --add /etc/rc.d/init.d/init-k8s.sh
chkconfig /etc/rc.d/init.d/init-k8s.sh on
```
-- Copy `~/.kube/config` from master node to this ECS `~./kube/config` to setup kubectl on this instance.
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+- Copy `~/.kube/config` from a control plane (previously referred to as master) node to this ECS `~./kube/config` to setup kubectl on this instance.
- Go to Huawei Cloud `Image Management` Service and click on `Create Image`. Select type `System disk image`, select your ECS instance as `Source`, then give it a name and then create.
9 changes: 5 additions & 4 deletions cluster-autoscaler/cloudprovider/magnum/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler for OpenStack Magnum
The cluster autoscaler for Magnum scales worker nodes within any
specified nodegroup. It will run as a `Deployment` in your cluster.
@@ -31,7 +32,7 @@ An example `ServiceAccount` is given in [examples/cluster-autoscaler-svcaccount.

The credentials for authenticating with OpenStack are stored in a secret and
mounted as a file inside the container. [examples/cluster-autoscaler-secret](examples/cluster-autoscaler-secret.yaml)
-can be modified with the contents of your cloud-config. This file can be obtained from your master node,
+can be modified with the contents of your cloud-config. This file can be obtained from your control plane (previously referred to as master) node,
in `/etc/kubernetes` (may be named `kube_openstack_config` instead of `cloud-config`).

## Autoscaler deployment
@@ -65,7 +66,7 @@ autoscalingGroups:
cloudConfigPath: "/etc/kubernetes/cloud-config"
```
-For running on the master node and other suggested settings, see
+For running on the control plane (previously referred to as master) node and other suggested settings, see
[examples/values-example.yaml](examples/values-example.yaml).
To deploy with node group autodiscovery (for cluster autoscaler v1.19+), see
[examples/values-autodiscovery.yaml](examples/values-autodiscovery.yaml).
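
For orientation, a minimal Helm values file along the lines of those examples might look roughly like this; the node group name and size limits are placeholders, and the exact keys should be verified against the example values files:

```yaml
# Illustrative only - examples/values-example.yaml is the source of truth,
# including the nodeSelector/tolerations used to run on a control plane node.
cloudConfigPath: "/etc/kubernetes/cloud-config"
autoscalingGroups:
  - name: default-worker-nodegroup   # placeholder node group name
    minSize: 1
    maxSize: 5
```
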
@@ -119,7 +120,7 @@ If you are deploying the autoscaler into a cluster which already has more than o
it is best to deploy it onto any node which already has non-default kube-system pods,
to minimise the number of nodes which cannot be removed when scaling.
-Or, if you are using a Magnum version which supports scheduling on the master node, then
+Or, if you are using a Magnum version which supports scheduling on the control plane (previously referred to as master) node, then
the example deployment file
-[examples/cluster-autoscaler-deployment-master.yaml](examples/cluster-autoscaler-deployment-master.yaml)
+[examples/cluster-autoscaler-deployment-master.yaml](examples/cluster-autoscaler-deployment-control-plane.yaml)
can be used.
5 changes: 3 additions & 2 deletions cluster-autoscaler/cloudprovider/packet/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler for Packet

The cluster autoscaler for [Packet](https://packet.com) worker nodes performs
@@ -86,7 +87,7 @@ If you are deploying the autoscaler into a cluster which already has more than o
it is best to deploy it onto any node which already has non-default kube-system pods,
to minimise the number of nodes which cannot be removed when scaling. For this reason in
the provided example the autoscaler pod has a nodeaffinity which forces it to deploy on
-the master node.
+the control plane (previously referred to as master) node.

### Changes

@@ -98,4 +99,4 @@ the master node.

4. Cloud inits in the examples have pinned versions for Kubernetes in order to minimize potential incompatibilities as a result of nodes provisioned with different Kubernetes versions.

-5. In the provided cluster-autoscaler deployment example, the autoscaler pod has a nodeaffinity which forces it to deploy on the master node, so that the cluster-autoscaler can scale down all of the worker nodes. Without this change there was a possibility for the cluster-autoscaler to be deployed on a worker node that could not be downscaled.
+5. In the provided cluster-autoscaler deployment example, the autoscaler pod has a nodeaffinity which forces it to deploy on the control plane (previously referred to as master) node, so that the cluster-autoscaler can scale down all of the worker nodes. Without this change there was a possibility for the cluster-autoscaler to be deployed on a worker node that could not be downscaled.
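
The node affinity the example relies on is not shown in this diff; assuming control plane nodes carry the conventional `node-role.kubernetes.io/master` label, such a constraint usually looks roughly like this:

```yaml
# Sketch only - the provided example deployment is the source of truth.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
```
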
2 changes: 1 addition & 1 deletion vertical-pod-autoscaler/pkg/admission-controller/README.md
@@ -18,7 +18,7 @@ one and use current recommendation to set resource requests in the pod.
Its `--admission-control` flag should have `MutatingAdmissionWebhook` as one of
the values on the list and its `--runtime-config` flag should include
`admissionregistration.k8s.io/v1beta1=true`.
-To change those flags, ssh to your master instance, edit
+To change those flags, ssh to your API Server instance, edit
`/etc/kubernetes/manifests/kube-apiserver.manifest` and restart kubelet to pick
up the changes: ```sudo systemctl restart kubelet.service```
1. Generate certs by running `bash gencerts.sh`. This will use kubectl to create
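
For context, the two API server flags mentioned above would appear in the static pod manifest roughly as follows; this is an excerpt sketch, and the existing list of admission plugins varies by cluster:

```yaml
# Excerpt sketch of /etc/kubernetes/manifests/kube-apiserver.manifest - not a complete manifest.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --admission-control=<existing plugins>,MutatingAdmissionWebhook
        - --runtime-config=admissionregistration.k8s.io/v1beta1=true
```
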
