-
Describe the solution you'd like: Document a way to upgrade Calico. For now the process can be done by running the upgrade manually against an already provisioned cluster.
After it is upgraded it is wise to manually bump the version in the Ansible config.
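A minimal sketch of what that upgrade flow could look like, assuming the cluster runs the upstream Tigera operator manifest; the manifest URL, version, and Ansible variable name below are illustrative, not taken from this thread:

```sh
# Re-apply the operator manifest for the new Calico release against the
# already provisioned cluster (URL and version are placeholders).
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# Afterwards, bump the matching version variable in the Ansible config so
# future provisioning runs install the same release, e.g. in group_vars:
#   calico_version: "v3.26.1"   # hypothetical variable name
```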
-
Would using the Tigera operator be an acceptable solution? I can open a PR tomorrow if that's the case. That way Renovate or Flux automation can stay on top of updates.
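For reference, a rough sketch of the Helm route; the chart repo URL may change over time and the version is illustrative, but pinning it is what Renovate or Flux automation keys on:

```sh
# Install the Tigera operator chart into its own namespace with a pinned
# version that update automation can bump.
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm install tigera-operator projectcalico/tigera-operator \
  --namespace tigera-operator --create-namespace \
  --version v3.26.1
```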
-
I'm taking over tigera-operator with Helm too, but it's not ideal because you need to manually apply the Helm ownership labels to the CRDs and resources or else it will not install. See my notes on deploying the Helm chart:
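For context, a sketch of the ownership metadata Helm checks before it will adopt an existing object; the CRD below is just one example, and the same treatment is needed for every Calico/Tigera resource the chart renders:

```sh
# Helm refuses to manage objects that lack this metadata, so each existing
# resource needs the release annotations plus the managed-by label.
kubectl annotate crd installations.operator.tigera.io --overwrite \
  meta.helm.sh/release-name=tigera-operator \
  meta.helm.sh/release-namespace=tigera-operator
kubectl label crd installations.operator.tigera.io --overwrite \
  app.kubernetes.io/managed-by=Helm
```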
-
I would be more inclined to support the method of installing Calico with the k3s HelmChart CRD and then taking it over with a Flux HelmRelease, but I haven't had time to explore this much.
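A sketch of what the k3s side could look like, assuming a HelmChart manifest is dropped into the k3s auto-deploy directory (/var/lib/rancher/k3s/server/manifests/); the repo URL and version are illustrative:

```sh
# k3s' bundled helm-controller installs charts described by HelmChart objects.
kubectl apply -f - <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: tigera-operator
  namespace: kube-system
spec:
  repo: https://docs.tigera.io/calico/charts   # illustrative repo URL
  chart: tigera-operator
  version: v3.26.1                             # illustrative version
  targetNamespace: tigera-operator
  createNamespace: true
EOF
```

A Flux HelmRelease that later takes over would need to use the same release name and target namespace for Helm to treat it as the same release.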
-
That's fair to want to support upgrades. Could also add a Job to do the relabeling. I already have a messy bash script I can clean up for use: https://github.com/h3mmy/bloopySphere/blob/main/fix-crd.sh. I'll check out the Rancher HelmChart option.
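Something in the spirit of that script, as an untested sketch that assumes every relevant CRD has "calico" or "tigera" in its name:

```sh
# Stamp Helm ownership metadata onto every Calico/Tigera CRD so a later
# helm install or HelmRelease can adopt them instead of erroring out.
for crd in $(kubectl get crd -o name | grep -E 'calico|tigera'); do
  kubectl annotate "$crd" --overwrite \
    meta.helm.sh/release-name=tigera-operator \
    meta.helm.sh/release-namespace=tigera-operator
  kubectl label "$crd" --overwrite app.kubernetes.io/managed-by=Helm
done
```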
-
Combing through the process, using the k3s HelmChart just seems like it's adding an extra step, since the relabeling would still need to be performed with a patch or a Job.
-
That's a bummer, I was hoping that it would add in the annotations for us.
-
I'll try a dry run when I'm able. Just want to make sure.
-
Right now this component is in limbo - it is not managed by either k3s or Flux. Could we perhaps apply the Helm ownership labels to the tigera-operator manifest on the Ansible side, when it is first deployed to the cluster?
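If that route is taken, the play could run the labelling right after the first apply; a rough sketch of the commands an Ansible task could wrap (the local manifest file name is illustrative):

```sh
# Apply the operator manifest, then immediately stamp Helm ownership metadata
# onto everything defined in it so Helm or Flux can adopt it later.
kubectl apply -f tigera-operator.yaml
kubectl annotate -f tigera-operator.yaml --overwrite \
  meta.helm.sh/release-name=tigera-operator \
  meta.helm.sh/release-namespace=tigera-operator
kubectl label -f tigera-operator.yaml --overwrite \
  app.kubernetes.io/managed-by=Helm
```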
-
I am not sure the best way forward to be honest. Right now there are two methods:
1. Apply the new manifests with kubectl.
2. Patch the Calico resources to add the Helm ownership and then apply the HelmRelease or Helm chart.
Having an Ansible playbook just for applying the patches might be annoying to maintain moving forward, e.g. Calico adding another resource that needs to be patched. Ideally I would like to switch to Cilium, but I am dead set on them implementing BGP without MetalLB hacks before I consider it.
-
I was going to suggest scripting that to check which CRDs require patching and run a templated task, but that may be equally annoying to maintain. I'm hoping to switch to Cilium at some point as well. I'm currently trying to figure out how to transition the cluster to BGP first.
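One way to generate that list would be to query for the CRDs that are still missing the managed-by label (a sketch, assuming jq is available and that the relevant CRDs match "calico" or "tigera"):

```sh
# Print the Calico/Tigera CRDs that still lack Helm ownership metadata;
# the output could feed a templated Ansible loop that patches only those.
kubectl get crd -o json | jq -r '
  .items[]
  | select(.metadata.name | test("calico|tigera"))
  | select(.metadata.labels["app.kubernetes.io/managed-by"] != "Helm")
  | .metadata.name'
```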
-
I had to do those as well:
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
-
Noting for anyone that stumbles onto this thread and has the following error when trying to
projectcalico/calico#6491 You'll want to use