Official 1.14 Release Docs #13174

Merged: 50 commits, merged Mar 25, 2019

Changes from 1 commit

Commits (50)
4652684
Official documentation on Poseidon/Firmament, a new multi-scheduler s…
Dec 23, 2018
cefff92
Document timeout attribute for kms-plugin. (#12158)
immutableT Jan 23, 2019
de2e67e
Official documentation on Poseidon/Firmament, a new multi-scheduler …
Jan 29, 2019
b822184
Remove initializers from doc. It will be removed in 1.14 (#12331)
caesarxuchao Jan 29, 2019
e528300
kubeadm: Document CRI auto detection functionality (#12462)
rosti Feb 8, 2019
ce380cc
Resolved merge conflict removing initializers
jimangel Feb 11, 2019
df1b59b
Minor doc change for GAing Pod DNS Config (#12514)
MrHohn Feb 12, 2019
eb5aaa7
Graduate ExpandInUsePersistentVolumes feature to beta (#10574)
mlmhl Feb 13, 2019
1588645
Rename 2018-11-07-grpc-load-balancing-with-linkerd.md.md file (#12594)
makoscafee Feb 13, 2019
48fd1e5
Add dynamic percentage of node scoring to user docs (#12235)
bsalamat Feb 15, 2019
d22320f
delete special symbol (#12445)
hyponet Feb 17, 2019
582995a
Update documentation for VolumeSubpathEnvExpansion (#11843)
Feb 20, 2019
16b551c
Graduate Pod Priority and Preemption to GA (#12428)
bsalamat Feb 20, 2019
99d3d86
Added Instana links to the documentation (#12977)
noctarius Mar 7, 2019
9742867
Update kubectl plugins to stable (#12847)
soltysh Mar 11, 2019
5f049ec
documentation for CSI topology beta (#12889)
msau42 Mar 11, 2019
98b449d
Document changes to default RBAC discovery ClusterRole(Binding)s (#12…
dekkagaijin Mar 12, 2019
ead0a28
CSI raw block to beta (#12931)
bswartz Mar 12, 2019
b37e645
Change incorrect string raw to block (#12926)
bswartz Mar 15, 2019
ac99ed4
Update documentation on node OS/arch labels (#12976)
yujuhong Mar 15, 2019
f7aa166
local pv GA doc updates (#12915)
msau42 Mar 15, 2019
f18d212
Publish CRD OpenAPI Documentation (#12910)
roycaihw Mar 15, 2019
90d53c2
kubeadm: add document for upgrading from 1.13 to 1.14 (single CP and …
neolit123 Mar 15, 2019
ed5f459
fix bullet indentation (#13214)
roycaihw Mar 15, 2019
6e49749
mark PodReadinessGate GA (#12800)
freehan Mar 16, 2019
cc769cb
Update RuntimeClass documentation for beta (#13043)
tallclair Mar 16, 2019
ee19771
CSI ephemeral volume alpha documentation (#10934)
vladimirvivien Mar 16, 2019
092e288
update kubectl documentation (#12867)
Liujingfang1 Mar 16, 2019
07c4eb3
Documentation for Windows GMSA feature (#12936)
ddebroy Mar 16, 2019
21d60d1
HugePages graduated to GA (#13004)
derekwaynecarr Mar 16, 2019
b36d68a
Docs for node PID limiting (https://github.com/kubernetes/kubernetes/…
RobertKrawitz Mar 16, 2019
c037ab5
kubeadm: update the reference documentation for 1.14 (#12911)
neolit123 Mar 16, 2019
f50c664
kubeadm: update the 1.14 HA guide (#13191)
neolit123 Mar 16, 2019
61372fe
resolve conflicts for master
jimangel Mar 16, 2019
a0b5acd
fixed a few missed merge conflicts
jimangel Mar 16, 2019
92fd5d4
Admission Webhook new features doc (#12938)
mbohlool Mar 18, 2019
3bf2d15
Clarifications and fixes in GMSA doc (#13226)
ddebroy Mar 18, 2019
e15667a
RunAsGroup documentation for Progressing this to Beta (#12297)
krmayankk Mar 18, 2019
655aed9
start serverside-apply documentation (#13077)
kwiesmueller Mar 18, 2019
965a801
Document CSI update (#12928)
gnufied Mar 19, 2019
cb0b9d0
Overall docs for CSI Migration feature (#12935)
ddebroy Mar 19, 2019
f1ffe72
Windows documentation updates for 1.14 (#12929)
craiglpeters Mar 19, 2019
94c455a
add section on upgrading CoreDNS (#12909)
rajansandeep Mar 19, 2019
30915de
documentation for kubelet resource metrics endpoint (#12934)
dashpole Mar 20, 2019
8f68521
windows docs updates for 1.14 (#13279)
michmike Mar 20, 2019
ae5d409
update to windows docs for 1.14 (#13322)
michmike Mar 22, 2019
74319b6
Update intro-windows-in-kubernetes.md (#13344)
michmike Mar 23, 2019
f902f7d
server side apply followup (#13321)
kwiesmueller Mar 23, 2019
87c1d6a
resolving conflicts
jimangel Mar 23, 2019
3459d02
Update config.toml (#13365)
jimangel Mar 25, 2019
kubeadm: update the 1.14 HA guide (#13191)
* kubeadm: update the 1.14 HA guide

* kubeadm: try to fix note/caution indent in HA page

* kubeadm: fix missing sudo and minor amends in HA doc

* kubeadm: apply latest amends to the HA doc for 1.14
neolit123 authored and k8s-ci-robot committed Mar 16, 2019
commit f50c664d78726b4345414bf741a734ef77615986
327 changes: 174 additions & 153 deletions content/en/docs/setup/independent/high-availability.md
@@ -19,15 +19,12 @@ control plane nodes and etcd members are separated.
Before proceeding, you should carefully consider which approach best meets the needs of your applications
and environment. [This comparison topic](/docs/setup/independent/ha-topology/) outlines the advantages and disadvantages of each.

Your clusters must run Kubernetes version 1.12 or later. You should also be aware that
setting up HA clusters with kubeadm is still experimental and will be further simplified
in future versions. You might encounter issues with upgrading your clusters, for example.
You should also be aware that setting up HA clusters with kubeadm is still experimental and will be further
simplified in future versions. You might encounter issues with upgrading your clusters, for example.
We encourage you to try either approach, and provide us with feedback in the kubeadm
[issue tracker](https://github.com/kubernetes/kubeadm/issues/new).

Note that the alpha feature gate `HighAvailability` is deprecated in v1.12 and removed in v1.13.

See also [The HA upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13).
See also [The upgrade documentation](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14).

{{< caution >}}
This page does not address running your cluster on a cloud provider. In a cloud
@@ -57,28 +54,12 @@ For the external etcd cluster only, you also need:

- Three additional machines for etcd members

{{< note >}}
The following examples run Calico as the Pod networking provider. If you run another
networking provider, make sure to replace any default values as needed.
{{< /note >}}

{{% /capture %}}

{{% capture steps %}}

## First steps for both methods

{{< note >}}
**Note**: All commands on any control plane or etcd node should be
run as root.
{{< /note >}}

- Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
some like Weave do not. See the [CNI network
documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under
the `networking` object of `ClusterConfiguration`.

### Create load balancer for kube-apiserver

{{< note >}}
@@ -119,38 +100,6 @@ option. Your cluster requirements may need a different configuration.

1. Add the remaining control plane nodes to the load balancer target group.
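A quick way to sanity-check the load balancer before continuing is a plain TCP connection test. This is only a sketch: it assumes `nc` (netcat) is installed, and the address and port are placeholders for your own values.

```sh
# Placeholder values; substitute your load balancer's DNS name and port.
nc -v LOAD_BALANCER_DNS LOAD_BALANCER_PORT
# A "connection refused" error is expected while no kube-apiserver is running yet;
# a timeout usually means the load balancer cannot reach the control plane node.
```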

### Configure SSH

SSH is required if you want to control all nodes from a single machine.

1. Enable ssh-agent on your main device that has access to all other nodes in
the system:

```
eval $(ssh-agent)
```

1. Add your SSH identity to the session:

```
ssh-add ~/.ssh/path_to_private_key
```

1. SSH between nodes to check that the connection is working correctly.

- When you SSH to any node, make sure to add the `-A` flag:

```
ssh -A 10.0.0.7
```

- When using sudo on any node, make sure to preserve the environment so SSH
forwarding works:

```
sudo -E -s
```

## Stacked control plane and etcd nodes

### Steps for the first control plane node
@@ -160,141 +109,131 @@ SSH is required if you want to control all nodes from a single machine.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
certSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"

- `kubernetesVersion` should be set to the Kubernetes version to use. This
example uses `stable`.
- `controlPlaneEndpoint` should match the address or DNS and port of the load balancer.
- It's recommended that the versions of kubeadm, kubelet, kubectl and Kubernetes match.
1. Make sure that the node is in a clean state:
{{< note >}}
Some CNI network plugins like Calico require a CIDR such as `192.168.0.0/16` and
some like Weave do not. See the [CNI network
documentation](/docs/setup/independent/create-cluster-kubeadm/#pod-network).
To add a pod CIDR set the `podSubnet: 192.168.0.0/16` field under
the `networking` object of `ClusterConfiguration`.
{{< /note >}}
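Putting the fields described above together, a minimal `kubeadm-config.yaml` could look like the following sketch. The DNS name, port, and `podSubnet` value are illustrative placeholders, not required values.

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "lb.example.com"                        # placeholder load balancer DNS name
controlPlaneEndpoint: "lb.example.com:6443" # placeholder load balancer DNS name and port
networking:
  podSubnet: "192.168.0.0/16"               # only needed by CNI plugins, such as Calico, that expect a pod CIDR
```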
1. Initialize the control plane:
```sh
sudo kubeadm init --config=kubeadm-config.yaml
sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
```
You should see something like:
- The `--experimental-upload-certs` flag is used to upload the certificates that should be shared
across all the control-plane instances to the cluster. If you prefer to copy the certificates across
control-plane nodes manually or with automation tools, remove this flag and refer to the [Manual
certificate distribution](#manual-certs) section below.
After the command completes you should see something like this:
```sh
...
You can now join any number of machines by running the following on each node
as root:
You can now join any number of control-plane nodes by running the following command on each as root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f
```
1. Copy this output to a text file. You will need it later to join other control plane nodes to the
cluster.
1. Apply the Weave CNI plugin:
Please note that the certificate key gives access to cluster-sensitive data, so keep it secret!
As a safeguard, the uploaded certificates will be deleted after two hours; if necessary, you can use `kubeadm init phase upload-certs` to re-upload them afterward.
```sh
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
```
1. Type the following and watch the pods of the components get started:
- Copy this output to a text file. You will need it later to join control plane and worker nodes to the cluster.
- When `--experimental-upload-certs` is used with `kubeadm init`, the certificates of the primary control plane
are encrypted and uploaded in the `kubeadm-certs` Secret.
- To re-upload the certificates and generate a new decryption key, use the following command on a control plane
node that is already joined to the cluster:
```sh
kubectl get pod -n kube-system -w
```
- It's recommended that you join new control plane nodes only after the first node has finished initializing.
```sh
sudo kubeadm init phase upload-certs --experimental-upload-certs
```
1. Copy the certificate files from the first control plane node to the rest:

In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
other control plane nodes.
```sh
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
```
{{< note >}}
The `kubeadm-certs` Secret and decryption key expire after two hours.
{{< /note >}}
{{< caution >}}
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
the creation of additional nodes could fail due to a lack of required SANs.
As stated in the command output, the certificate key gives access to cluster-sensitive data, so keep it secret!
{{< /caution >}}
### Steps for the rest of the control plane nodes
1. Apply the CNI plugin of your choice:
[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
the CNI provider. Make sure the configuration corresponds to the Pod CIDR specified in the kubeadm
configuration file if applicable.
1. Move the files created by the previous step where `scp` was used:
In this example we are using Weave Net:
```sh
USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
This process writes all the requested files in the `/etc/kubernetes` folder.

1. Start `kubeadm join` on this node using the join command that was previously given to you by `kubeadm init` on
the first node. It should look something like this:
1. Type the following and watch the pods of the control plane components get started:
```sh
sudo kubeadm join 192.168.0.200:6443 --token j04n3m.octy8zely83cy2ts --discovery-token-ca-cert-hash sha256:84938d2a22203a8e56a787ec0c6ddad7bc7dbd52ebabc62fd5f4dbea72b14d1f --experimental-control-plane
kubectl get pod -n kube-system -w
```
- Notice the addition of the `--experimental-control-plane` flag. This flag automates joining this
control plane node to the cluster.
### Steps for the rest of the control plane nodes
{{< caution >}}
You must join new control plane nodes sequentially, only after the first node has finished initializing.
{{< /caution >}}
1. Type the following and watch the pods of the components get started:
For each additional control plane node you should:
1. Execute the join command that was previously given to you by the `kubeadm init` output on the first node.
It should look something like this:
```sh
kubectl get pod -n kube-system -w
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --experimental-control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
```
1. Repeat these steps for the rest of the control plane nodes.
- The `--experimental-control-plane` flag tells `kubeadm join` to create a new control plane.
- The `--certificate-key ...` flag causes the control plane certificates to be downloaded
from the `kubeadm-certs` Secret in the cluster and decrypted using the given key.
## External etcd nodes
Setting up a cluster with external etcd nodes is similar to the procedure used for stacked etcd
with the exception that you should set up etcd first, and you should pass the etcd information
in the kubeadm config file.
### Set up the etcd cluster
- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.
1. Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.
### Set up the first control plane node
1. Set up SSH as described [here](#manual-certs).
1. Copy the following files from any node from the etcd cluster to this node:
1. Copy the following files from any etcd node in the cluster to the first control plane node:
```sh
export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
```
- Replace the value of `CONTROL_PLANE` with the `user@host` of this machine.
- Replace the value of `CONTROL_PLANE` with the `user@host` of the first control plane machine.
### Set up the first control plane node
1. Create a file called `kubeadm-config.yaml` with the following contents:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
certSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
external:
@@ -306,49 +245,131 @@ the creation of additional nodes could fail due to a lack of required SANs.
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
- The difference between stacked etcd and external etcd here is that we are using the `external` field for `etcd` in the kubeadm config. In the case of the stacked etcd topology this is managed automatically.
{{< note >}}
The difference between stacked etcd and external etcd here is that we are using
the `external` field for `etcd` in the kubeadm config. In the case of the stacked
etcd topology this is managed automatically.
{{< /note >}}
- Replace the following variables in the template with the appropriate values for your cluster:
- Replace the following variables in the config template with the appropriate values for your cluster:
- `LOAD_BALANCER_DNS`
- `LOAD_BALANCER_PORT`
- `ETCD_0_IP`
- `ETCD_1_IP`
- `ETCD_2_IP`
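For illustration, a filled-in version of the external etcd part of the config could look like the sketch below, assuming three hypothetical etcd members at 10.0.0.11, 10.0.0.12 and 10.0.0.13.

```yaml
etcd:
  external:
    endpoints:
    - https://10.0.0.11:2379   # placeholder for ETCD_0_IP
    - https://10.0.0.12:2379   # placeholder for ETCD_1_IP
    - https://10.0.0.13:2379   # placeholder for ETCD_2_IP
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```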
1. Run `kubeadm init --config kubeadm-config.yaml` on this node.
The following steps are exactly the same as described for the stacked etcd setup:
1. Write the join command that is returned to a text file for later use.
1. Run `sudo kubeadm init --config kubeadm-config.yaml --experimental-upload-certs` on this node.
1. Apply the Weave CNI plugin:
1. Write the output join commands that are returned to a text file for later use.
1. Apply the CNI plugin of your choice. The given example is for Weave Net:
```sh
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
### Steps for the rest of the control plane nodes
To add the rest of the control plane nodes, follow [these instructions](#steps-for-the-rest-of-the-control-plane-nodes).
The steps are the same as for the stacked etcd setup, with the exception that a local
etcd member is not created.

To summarize:
The steps are the same as for the stacked etcd setup:
- Make sure the first control plane node is fully initialized.
- Copy certificates between the first control plane node and the other control plane nodes.
- Join each control plane node with the join command you saved to a text file, plus add the `--experimental-control-plane` flag.
- Join each control plane node with the join command you saved to a text file. It's recommended
to join the control plane nodes one at a time.
- Don't forget that the decryption key from `--certificate-key` expires after two hours by default.
## Common tasks after bootstrapping control plane
### Install a pod network
### Install workers
[Follow these instructions](/docs/setup/independent/create-cluster-kubeadm/#pod-network) to install
the pod network. Make sure this corresponds to whichever pod CIDR you provided
in the master configuration file.
Worker nodes can be joined to the cluster with the command you stored previously
as the output from the `kubeadm init` command:
### Install workers
```sh
sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
```
## Manual certificate distribution {#manual-certs}
If you choose not to use `kubeadm init` with the `--experimental-upload-certs` flag, you must
manually copy the certificates from the primary control plane node to the
joining control plane nodes.
There are many ways to do this. In the following example we are using `ssh` and `scp`:
SSH is required if you want to control all nodes from a single machine.
1. Enable ssh-agent on your main device that has access to all other nodes in
the system:
```
eval $(ssh-agent)
```
1. Add your SSH identity to the session:
```
ssh-add ~/.ssh/path_to_private_key
```
1. SSH between nodes to check that the connection is working correctly.
- When you SSH to any node, make sure to add the `-A` flag:
```
ssh -A 10.0.0.7
```
Each worker node can now be joined to the cluster with the command returned from any of the
`kubeadm init` commands. The flag `--experimental-control-plane` should not be added to worker nodes.
- When using sudo on any node, make sure to preserve the environment so SSH
forwarding works:
```
sudo -E -s
```
1. After configuring SSH on all the nodes you should run the following script on the first control plane node after
running `kubeadm init`. This script will copy the certificates from the first control plane node to the other
control plane nodes:
In the following example, replace `CONTROL_PLANE_IPS` with the IP addresses of the
other control plane nodes.
```sh
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
```
{{< caution >}}
Copy only the certificates in the above list. kubeadm will take care of generating the rest of the certificates
with the required SANs for the joining control-plane instances. If you copy all the certificates by mistake,
the creation of additional nodes could fail due to a lack of required SANs.
{{< /caution >}}
1. Then on each joining control plane node you must run the following script before running `kubeadm join`.
This script will move the previously copied certificates from the home directory to `/etc/kubernetes/pki`:
```sh
USER=ubuntu # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```
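As an optional sanity check before running `kubeadm join`, you can confirm that the certificates landed where kubeadm expects them:

```sh
# Should list ca.crt, ca.key, sa.key, sa.pub, front-proxy-ca.crt and front-proxy-ca.key
ls /etc/kubernetes/pki/
# Should list the etcd CA files ca.crt and ca.key
ls /etc/kubernetes/pki/etcd/
```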
{{% /capture %}}