Upgrading Calico #321

Closed

onedr0p opened this issue May 10, 2022 · 12 comments

@onedr0p (Owner) commented May 10, 2022

Details

Describe the solution you'd like:

Document a way to upgrade Calico. For now, the upgrade can be done by running the following against an already provisioned cluster:

kubectl replace -f https://projectcalico.docs.tigera.io/archive/v3.24/manifests/tigera-operator.yaml

After the upgrade, it is wise to manually bump the version in the Ansible config:

https://github.com/k8s-at-home/template-cluster-k3s/blob/63d077e1dd50cb0ae9af5c21d951bec1d78c60ad/provision/ansible/inventory/group_vars/kubernetes/k3s.yml#L31
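
For reference, a minimal sketch of verifying the upgrade afterwards; the resource names assume the default tigera-operator install and are not part of the original instructions:

# confirm the operator rolled out and Calico reports the expected version
kubectl rollout status -n tigera-operator deployment/tigera-operator
kubectl get tigerastatus
kubectl get clusterinformation default -o jsonpath='{.spec.calicoVersion}'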

onedr0p pinned this issue Jun 10, 2022
@h3mmy commented Jun 19, 2022

Would using the tigera-operator Helm chart be an acceptable solution? I can open a PR tomorrow if that's the case. That way Renovate or Flux automation can stay on top of updates.

Example

@onedr0p (Owner, Author) commented Jun 19, 2022

I'm taking over tigera-operator with Helm too, but it's not ideal because you need to manually apply the Helm ownership labels and annotations to the CRDs and resources or else it will not install.

See my notes on deploying the helm chart:

onedr0p/home-ops#3385
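
As a rough sketch of what that takeover involves (the repo URL and chart name are assumptions based on the upstream Calico docs, not taken from this thread): label and annotate the existing objects so Helm will adopt them, then install the chart over the top.

# mark an existing object as Helm-owned (repeat for every resource in the chart)
kubectl label -n tigera-operator deployment tigera-operator app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate -n tigera-operator deployment tigera-operator \
  meta.helm.sh/release-name=tigera-operator \
  meta.helm.sh/release-namespace=tigera-operator --overwrite
# then take over with the chart
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm upgrade --install tigera-operator projectcalico/tigera-operator --namespace tigera-operator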

@onedr0p (Owner, Author) commented Jun 19, 2022

I would be more inclined to support the method of installing Calico with the k3s HelmChart CRD and then taking it over with a Flux HelmRelease, but I haven't had time to explore this much.
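
A rough sketch of that approach, assuming the k3s HelmChart controller (helm.cattle.io/v1), the upstream chart repo, and a default k3s manifests path; none of these specifics come from this thread:

# drop a HelmChart manifest where k3s auto-applies it; Ansible could template this file
cat <<'EOF' | sudo tee /var/lib/rancher/k3s/server/manifests/tigera-operator.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: tigera-operator
  namespace: kube-system
spec:
  repo: https://projectcalico.docs.tigera.io/charts
  chart: tigera-operator
  version: v3.24.0          # example version, pin to whatever the cluster should run
  targetNamespace: tigera-operator   # may need to exist beforehand depending on the k3s version
EOF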

@h3mmy commented Jun 19, 2022

That's fair to want to support upgrades. Could also add a Job to do the relabeling. I already have a messy bash script I can clean up for use: https://github.com/h3mmy/bloopySphere/blob/main/fix-crd.sh

I'll check out the Rancher HelmChart option.

@h3mmy commented Jun 22, 2022

Combing through the process, using the k3s HelmChart just seems like it's adding an extra step since the relabeling would still need to be performed with a Patch or Job.

@onedr0p (Owner, Author) commented Jun 22, 2022

That's a bummer, I was hoping that it would add in the annotations for us.

@h3mmy commented Jun 23, 2022

I'll try a dry run when I'm able. Just want to make sure.

@haraldkoch (Contributor) commented

Right now this component is in limbo: it is not managed by either k3s or Flux.

Could we perhaps apply the Helm ownership labels to the tigera-operator manifest on the Ansible side, when it is first deployed to the cluster?

@onedr0p (Owner, Author) commented Aug 5, 2022

I am not sure of the best way forward, to be honest. Right now there are two methods:

  1. Apply the new manifests with kubectl

    kubectl apply -f https://projectcalico.docs.tigera.io/archive/v3.22/manifests/tigera-operator.yaml
  2. Patch the Calico resources to add the Helm ownership labels and annotations, then apply the HelmRelease or Helm chart

    kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch installation default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'
    kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'

Having an Ansible playbook just for applying the patches might be annoying to maintain moving forward, for example if Calico adds another resource that needs to be patched.
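
For what it's worth, the repeated patches above can be collapsed into a loop; a minimal sketch covering only the resources already listed, using kubectl label/annotate in place of the equivalent patches:

# cluster-scoped resources
for target in "installation default" "podsecuritypolicy tigera-operator" \
              "clusterrole tigera-operator" "clusterrolebinding tigera-operator"; do
  kubectl label $target app.kubernetes.io/managed-by=Helm --overwrite
  kubectl annotate $target \
    meta.helm.sh/release-name=tigera-operator \
    meta.helm.sh/release-namespace=tigera-operator --overwrite
done
# namespaced resources
for target in "deployment tigera-operator" "serviceaccount tigera-operator"; do
  kubectl -n tigera-operator label $target app.kubernetes.io/managed-by=Helm --overwrite
  kubectl -n tigera-operator annotate $target \
    meta.helm.sh/release-name=tigera-operator \
    meta.helm.sh/release-namespace=tigera-operator --overwrite
done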

Ideally I would like to switch to Cilium, but I am dead set on them implementing BGP without MetalLB hacks before I consider it.

@h3mmy commented Aug 7, 2022

I was going to suggest scripting a check for which CRDs require patching and running a templated task, but that may be equally annoying to maintain. I'm hoping to switch to Cilium at some point as well. I'm currently trying to figure out how to transition the cluster to BGP first.
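
A minimal sketch of that kind of check, looking up a few of the cluster-scoped resources from the patch list above by name (a hypothetical helper, not an existing script in either repo):

# report which resources are still missing the Helm ownership label
for target in installation/default clusterrole/tigera-operator clusterrolebinding/tigera-operator; do
  managed=$(kubectl get "$target" -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}')
  [ "$managed" = "Helm" ] || echo "$target still needs patching"
done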

@Diaoul commented Sep 4, 2022

I had to do these as well:

kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-name": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"labels": {"app.kubernetes.io/managed-by": "Helm"}}}'

@sp3nx0r (Contributor) commented Sep 24, 2022

Noting for anyone who stumbles onto this thread and hits the following error when trying to kubectl apply -f a newer version:

The CustomResourceDefinition "installations.operator.tigera.io" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

You'll want to use kubectl replace since these are CRDs; see projectcalico/calico#6491.
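
As an aside not covered in the thread: server-side apply also avoids this error, because it does not write the oversized last-applied-configuration annotation that client-side apply adds. Either of these works against the v3.24 manifest referenced above:

kubectl replace -f https://projectcalico.docs.tigera.io/archive/v3.24/manifests/tigera-operator.yaml
kubectl apply --server-side --force-conflicts -f https://projectcalico.docs.tigera.io/archive/v3.24/manifests/tigera-operator.yaml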

onedr0p unpinned this issue Oct 7, 2022
Repository owner locked and limited conversation to collaborators Jan 9, 2023
onedr0p converted this issue into discussion #591 Jan 9, 2023

