---
description: Upgrade to a newer version of Calico for Kubernetes.
---
import AutoHostendpointsMigrate from '@site/calico/_includes/components/AutoHostendpointsMigrate';
# Upgrade Calico on Kubernetes
## About upgrading $[prodname]
This page describes how to upgrade to $[version] from $[prodname] v3.15 or later. The
procedure varies by datastore type and install method.
If you are using $[prodname] in etcd mode on a Kubernetes cluster, we recommend upgrading to the Kubernetes API datastore [as discussed here](../datastore-migration.mdx).
If you have installed $[prodname] using the `calico.yaml` manifest, we recommend upgrading to the $[prodname] operator, [as discussed here](../operator-migration.mdx).
- [Upgrading an installation that was installed using Helm](#upgrading-an-installation-that-was-installed-using-helm)
- [Upgrading an installation that uses the operator](#upgrading-an-installation-that-uses-the-operator)
- [Upgrading an installation that uses manifests and the Kubernetes API datastore](#upgrading-an-installation-that-uses-manifests-and-the-kubernetes-api-datastore)
- [Upgrading an installation that uses an etcd datastore](#upgrading-an-installation-that-uses-an-etcd-datastore)
:::note
Do not use older versions of `calicoctl` after the upgrade.
This may result in unexpected behavior and data inconsistencies.
:::
## Upgrade OwnerReferences
If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.
Starting in $[prodname] v3.28, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After the upgrade, the UID of every projectcalico.org/v3 resource will have changed, which causes
Kubernetes to garbage collect any resources owned by them.
1. Remove any OwnerReferences from resources in your cluster that have `apiGroup: projectcalico.org/v3`.
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID, as in the sketch below.
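For example, a minimal sketch of steps 1 and 3, assuming a hypothetical ConfigMap `my-config` owned by an `IPPool` named `my-pool`, and assuming the $[prodname] API server is enabled so `kubectl` can read projectcalico.org/v3 resources (otherwise, use `calicoctl` to read the owner):

```bash
# Before the upgrade: drop the stale OwnerReference
# (index 0 is assumed to be the projectcalico.org/v3 owner).
kubectl patch configmap my-config --type=json \
  -p='[{"op": "remove", "path": "/metadata/ownerReferences/0"}]'

# After the upgrade: read the owner's new UID...
NEW_UID=$(kubectl get ippool my-pool -o jsonpath='{.metadata.uid}')

# ...and re-create the OwnerReference pointing at it.
kubectl patch configmap my-config --type=merge -p "{\"metadata\": {\"ownerReferences\": [{\"apiVersion\": \"projectcalico.org/v3\", \"kind\": \"IPPool\", \"name\": \"my-pool\", \"uid\": \"${NEW_UID}\"}]}}"
```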
## Upgrading an installation that was installed using Helm

Prior to release v3.23, the $[prodname] Helm chart itself deployed the `tigera-operator` namespace and required that the Helm release be
installed in the `default` namespace. Newer releases properly defer creation of the `tigera-operator` namespace to the user and allow installation
of the chart into the `tigera-operator` namespace.

When upgrading from $[prodname] v3.22 or earlier to v3.23 or later, you must complete the following steps to migrate
ownership of the Helm resources to the new chart location.
### Upgrade from Calico versions prior to v3.23.0
1. Patch existing resources so that the new chart can assume ownership.
```bash
kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
```
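To confirm the patches took effect, you can read an annotation back. A sketch, shown here for the `Installation` resource:

```bash
kubectl get installation default -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-namespace}'
```

This should print `tigera-operator`.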
1. Apply the $[version] CRDs:
```bash
kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/operator-crds.yaml
```
1. Install the helm chart in the `tigera-operator` namespace.
```bash
helm install $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] --namespace tigera-operator
```
1. Once the install has succeeded, you can delete any old releases in the `default` namespace.
```bash
kubectl delete secret -n default -l name=calico,owner=helm --dry-run=client
```
:::note
The above command uses `--dry-run=client` so that no changes are made to your cluster. We recommend reviewing
the output and then re-running the command without the flag, as shown below, to perform the deletion.
:::
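Once you have reviewed the output, run the same command without the flag to delete the old release secrets:

```bash
kubectl delete secret -n default -l name=calico,owner=helm
```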
### All other upgrades
1. Apply the $[version] CRDs:
```bash
kubectl apply --server-side --force-conflicts -f $[manifestsUrl]/manifests/operator-crds.yaml
```
1. Run the helm upgrade:
```bash
helm upgrade $[prodnamedash] projectcalico/tigera-operator
```
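If you installed the release into the `tigera-operator` namespace as described above, a sketch that pins the chart version and targets that namespace (both are standard Helm flags):

```bash
helm upgrade $[prodnamedash] projectcalico/tigera-operator --version $[releaseTitle] --namespace tigera-operator
```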
## Upgrading an installation that uses the operator
1. Download the $[version] operator manifest.
```bash
curl $[manifestsUrl]/manifests/tigera-operator.yaml -O
```
1. Use the following command to initiate an upgrade.
```bash
kubectl apply --server-side --force-conflicts -f tigera-operator.yaml
```
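To follow the rollout, a sketch using standard `kubectl` commands (assumes the operator's `TigeraStatus` resource is available in your cluster):

```bash
# Wait for the operator Deployment to finish rolling out.
kubectl rollout status -n tigera-operator deployment/tigera-operator
# Check that the operator reports its components as Available.
kubectl get tigerastatus
```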
## Upgrading an installation that uses manifests and the Kubernetes API datastore
1. Download the $[version] manifest that corresponds to your original installation method.
**$[prodname] for policy and networking**
```bash
curl $[manifestsUrl]/manifests/calico.yaml -o upgrade.yaml
```
**$[prodname] for policy and flannel for networking**
```bash
curl $[manifestsUrl]/manifests/canal.yaml -o upgrade.yaml
```
**$[prodname] for policy (advanced)**
```bash
curl $[manifestsUrl]/manifests/calico-policy-only.yaml -o upgrade.yaml
```
:::note
If you manually modified the manifest, you must manually apply the
same changes to the downloaded manifest.
:::
1. Use the following command to initiate a rolling update.
```bash
kubectl apply --server-side --force-conflicts -f upgrade.yaml
```
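You can also track the rolling update of the node DaemonSet directly. A sketch, assuming the DaemonSet name `calico-node` used by the standard manifest:

```bash
kubectl rollout status daemonset/calico-node -n kube-system
```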
1. Watch the status of the upgrade as follows.
```bash
watch kubectl get pods -n kube-system
```
Verify that the status of all $[prodname] pods indicates `Running`.
```
$[noderunning]-hvvg8 2/2 Running 0 3m
$[noderunning]-vm8kh 2/2 Running 0 3m
$[noderunning]-w92wk 2/2 Running 0 3m
```
1. Remove any existing `calicoctl` instances, [install the new `calicoctl`](../calicoctl/install.mdx)
and [configure it to connect to your datastore](../calicoctl/configure/overview.mdx).
1. Use the following command to check the $[prodname] version number.
```bash
calicoctl version
```
It should return a `Cluster Version` of `$[version].x`.
1. If you have [enabled application layer policy](../../network-policy/istio/app-layer-policy.mdx),
follow [the instructions below](#upgrading-if-you-have-application-layer-policy-enabled) to complete your upgrade. Skip this if you are not using Istio with $[prodname].
1. If you upgraded from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy,
add any global network policies needed to allow traffic, and then delete the temporary network policy **allow-all-upgrade**.
1. Congratulations! You have upgraded to $[prodname] $[version].
## Upgrading an installation that uses an etcd datastore
1. Download the $[version] manifest that corresponds to your original installation method.
**$[prodname] for policy and networking**
```bash
curl $[manifestsUrl]/manifests/calico-etcd.yaml -o upgrade.yaml
```
**$[prodname] for policy and flannel for networking**
```bash
curl $[manifestsUrl]/manifests/canal-etcd.yaml -o upgrade.yaml
```
:::note
You must manually apply the changes you made to the manifest
during installation to the downloaded $[version] manifest. At a minimum,
you must set the `etcd_endpoints` value.
:::
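If you no longer have your customized manifest, a sketch for recovering the current value from the running cluster (the `calico-config` ConfigMap name comes from the standard etcd manifests):

```bash
kubectl get configmap calico-config -n kube-system -o jsonpath='{.data.etcd_endpoints}'
```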
1. Use the following command to initiate a rolling update.
```bash
kubectl apply --server-side --force-conflicts -f upgrade.yaml
```
1. Watch the status of the upgrade as follows.
```bash
watch kubectl get pods -n kube-system
```
Verify that the status of all $[prodname] pods indicates `Running`.
```
calico-kube-controllers-6d4b9d6b5b-wlkfj 1/1 Running 0 3m
$[noderunning]-hvvg8 1/2 Running 0 3m
$[noderunning]-vm8kh 1/2 Running 0 3m
$[noderunning]-w92wk 1/2 Running 0 3m
```
:::tip
The $[noderunning] pods will report `1/2` in the `READY` column, as shown.
:::
1. Remove any existing `calicoctl` instances, [install the new `calicoctl`](../calicoctl/install.mdx)
and [configure it to connect to your datastore](../calicoctl/configure/overview.mdx).
1. Use the following command to check the $[prodname] version number.
```bash
calicoctl version
```
It should return a `Cluster Version` of `$[version]`.
1. If you have [enabled application layer policy](../../network-policy/istio/app-layer-policy.mdx),
follow [the instructions below](#upgrading-if-you-have-application-layer-policy-enabled) to complete your upgrade. Skip this if you are not using Istio with $[prodname].
1. If you upgraded from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy,
add any global network policies needed to allow traffic, and then delete the temporary network policy **allow-all-upgrade**.
1. Congratulations! You have upgraded to $[prodname] $[version].
## Upgrading if you have Application Layer Policy enabled
Dikastes is versioned the same as the rest of $[prodname], but an upgraded `calico-node` can still work with a down-level Dikastes,
so you will not lose data plane connectivity during the upgrade. Once `calico-node` is upgraded, you can begin redeploying your service pods
with the updated version of Dikastes.
If you have [enabled application layer policy](../../network-policy/istio/app-layer-policy.mdx),
take the following steps to upgrade the Dikastes sidecars running in your application pods. Skip these steps if you are not using Istio with $[prodname].
1. Update the Istio sidecar injector template to use the new version of Dikastes. Replace `<your Istio version>` below with
the full version string of your Istio install, for example `1.4.2`.
```bash
kubectl apply -f $[manifestsUrl]/manifests/alp/istio-inject-configmap-<your Istio version>.yaml
```
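To verify that the updated template references the new Dikastes version, a sketch (assumes Istio's default `istio-sidecar-injector` ConfigMap in the `istio-system` namespace):

```bash
kubectl get configmap istio-sidecar-injector -n istio-system -o yaml | grep dikastes
```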
1. Once the new template is in place, newly created pods use the upgraded version of Dikastes. Perform a rolling update of each of your service deployments
to get them on the new version of Dikastes, as in the sketch below.
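For example, a minimal sketch using a rolling restart, assuming a hypothetical Deployment named `frontend` in a namespace with sidecar injection enabled:

```bash
# Restart the pods so they are re-created with the upgraded Dikastes sidecar.
kubectl rollout restart deployment/frontend
kubectl rollout status deployment/frontend
```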
<AutoHostendpointsMigrate orch='Kubernetes' />