Upgrading Calico #321
Comments
Would using the tigera operator be an acceptable solution? I can open a PR tomorrow if that's the case. That way Renovate or Flux automation can stay on top of updates.
I'm taking over tigera-operator with Helm too, but it's not ideal because you need to manually apply the Helm ownership labels to the CRDs and resources or else it will not install. See my notes on deploying the Helm chart:
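(The linked notes were not preserved in this copy of the thread. For reference, the ownership-label step looks roughly like the following; a minimal sketch assuming both the Helm release and its namespace are named tigera-operator, using one real operator CRD as the example target.)

```bash
# Helm refuses to adopt pre-existing resources unless they carry its
# ownership metadata, so each CRD/resource the chart owns must be stamped.
kubectl label crd installations.operator.tigera.io \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd installations.operator.tigera.io \
  meta.helm.sh/release-name=tigera-operator \
  meta.helm.sh/release-namespace=tigera-operator --overwrite
```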
I would be more inclined to support the method of installing Calico with the k3s HelmChart CRD and then taking it over with a Flux HelmRelease, but I haven't had time to explore this much.
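(For context, a k3s HelmChart for this would look roughly like the sketch below; k3s's embedded helm-controller picks up HelmChart resources in kube-system. The chart repo URL and version are assumptions and should be checked against the Calico docs.)

```bash
kubectl apply -f - <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: tigera-operator
  namespace: kube-system
spec:
  repo: https://projectcalico.docs.tigera.io/charts   # assumed repo URL
  chart: tigera-operator
  version: v3.24.1                                    # example version only
  targetNamespace: tigera-operator
EOF
```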
That's fair to want to support upgrades. We could also add a Job to do the relabeling. I already have a messy bash script I can clean up for use: https://github.com/h3mmy/bloopySphere/blob/main/fix-crd.sh. I'll check out the Rancher HelmChart option.
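(The linked script is not reproduced here; below is only a rough sketch of what such a relabeling loop might do, again assuming the release and namespace are both tigera-operator.)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Stamp every Calico/Tigera CRD with the ownership metadata Helm checks
# for before it will adopt an existing resource.
for crd in $(kubectl get crds -o name | grep -E 'projectcalico|tigera'); do
  kubectl label "${crd}" app.kubernetes.io/managed-by=Helm --overwrite
  kubectl annotate "${crd}" \
    meta.helm.sh/release-name=tigera-operator \
    meta.helm.sh/release-namespace=tigera-operator --overwrite
done
```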
Combing through the process, using the k3s HelmChart just seems to add an extra step, since the relabeling would still need to be performed with a patch or a Job.
That's a bummer; I was hoping it would add the annotations for us.
I'll try a dry run when I'm able. Just want to make sure.
Right now this component is in limbo: it is not managed by either k3s or Flux. Could we perhaps apply the Helm ownership labels to the tigera-operator manifest on the Ansible side, when it is first deployed to the cluster?
I am not sure of the best way forward, to be honest. Right now there are two methods:
1. Take over the tigera-operator Helm chart with a Flux HelmRelease, manually applying the Helm ownership labels to the CRDs and resources first.
2. Install Calico with the k3s HelmChart CRD and then take it over with a Flux HelmRelease.
Having an Ansible playbook just for applying the patches might be annoying to maintain moving forward, e.g. Calico adding another resource that needs to be patched, or whatever. Ideally I would like to switch to Cilium, but I am dead set on them implementing BGP without MetalLB hacks before I consider it.
I was going to suggest scripting that to check which CRDs require patching and running a templated task, but that may be equally annoying to maintain. I'm hoping to switch to Cilium at some point as well. I'm currently trying to figure out how to transition the cluster to BGP first (rough sketch below).
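(For reference, the Calico side of such a transition usually comes down to a BGPConfiguration plus one or more BGPPeers. A hedged sketch follows; the AS numbers and peer IP are placeholders, and it assumes the projectcalico.org/v3 API is reachable, i.e. the Calico API server is installed or calicoctl is used instead of kubectl.)

```bash
kubectl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  asNumber: 64512              # placeholder cluster ASN
  nodeToNodeMeshEnabled: true
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  peerIP: 192.168.1.1          # placeholder router address
  asNumber: 64500              # placeholder router ASN
EOF
```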
I had to do those as well:

```bash
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
```
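(For what it's worth, those three patches should collapse into a single merge patch; an untested sketch:)

```bash
kubectl patch apiserver default --type=merge -p '{
  "metadata": {
    "annotations": {
      "meta.helm.sh/release-name": "tigera-operator",
      "meta.helm.sh/release-namespace": "tigera-operator"
    },
    "labels": {"app.kubernetes.io/managed-by": "Helm"}
  }
}'
```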
Noting for anyone that stumbles onto this thread and runs into the error discussed in projectcalico/calico#6491: you'll want to use the workaround described there.
This issue was moved to a discussion. You can continue the conversation there.
Details
Describe the solution you'd like:
Document a way to upgrade Calico. For now, the process can be done by running the upgrade manually against an already provisioned cluster.
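A hedged sketch of what that manual run might look like using the operator manifest; the version pinned below is only an example and should be taken from the Calico release notes:

```bash
# Server-side apply sidesteps the client-side annotation size limit that
# the large Calico CRDs can otherwise hit with a plain "kubectl apply".
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
```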
After it is upgraded, it is wise to manually bump the version in the Ansible config:
https://github.com/k8s-at-home/template-cluster-k3s/blob/63d077e1dd50cb0ae9af5c21d951bec1d78c60ad/provision/ansible/inventory/group_vars/kubernetes/k3s.yml#L31
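A sketch of the bump itself; the variable name and version here are hypothetical, the real ones are on the linked line:

```bash
# Hypothetical variable name and version, for illustration only.
sed -i 's/^calico_version: .*/calico_version: "v3.24.1"/' \
  provision/ansible/inventory/group_vars/kubernetes/k3s.yml
```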