Cilium: kubelet does not detect cilium CNI #10887
I got it working by using the kubespray 1.23.1 container and not setting the cilium version explicitly, only setting …
Same problem encountered here, please reopen. This is actually two issues in one: …
Reopening, since I only avoided the issue and did not actually fix it, and apparently others run into the same problem. @manicole We do have the cilium … While I no longer have the broken cluster, I am pretty sure that it also had an empty …
@RubenMakandra Thanks for reopening and for your insights. Testing is in progress on my side ;) In addition, I found issue #10684 mentioning the problem and proposing a way to solve it. I commented on it to ask for a PR.
Thanks for looking into the other issue and pull request! To sum it up (correct me if I'm wrong!), kubespray v2.24 only added support for upgrading cilium <1.14 to 1.14, but does not support provisioning clusters directly with cilium 1.14.
Perhaps the release note should be changed from … to …, since the current release note does not appear to be correct in implying that it is possible to deploy a [new] cluster with cilium 1.14. Fixing the provisioning of new clusters with Cilium 1.14 would of course be highly appreciated too!
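Until fresh installs with 1.14 work, a workaround consistent with the summary above is to pin a pre-1.14 cilium when provisioning new clusters. A group_vars sketch only — the file path and the exact 1.13.x patch release are my assumptions, not taken from this thread:

```yaml
# inventory/<cluster>/group_vars/k8s_cluster/k8s-net-cilium.yml  (path assumed)
kube_network_plugin: cilium
# kubespray v2.24 only supports *upgrading* existing <1.14 installs to 1.14,
# so pin a pre-1.14 release for a new cluster (version shown is hypothetical):
cilium_version: v1.13.9
```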
I'm experiencing similar issues.
/assign @cleman95
I'm not completely sure #10945 solves the issue, at least for cilium 1.15.x. Should we perhaps add this hint to the group vars sample?
I might have made a mistake with the path in the configMap. It should have been …
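Since the thread keeps coming back to kubelet not detecting the CNI, a quick node-side check can be sketched as follows. The directory and file name are the containerd/cilium defaults and are assumptions on my part, not confirmed by this report:

```shell
# Report whether a node has a CNI config that kubelet (via containerd)
# can detect. /etc/cni/net.d is the default config directory; a healthy
# cilium agent writes 05-cilium.conflist there (both are assumed defaults).
check_cni_config() {
  local dir="${1:-/etc/cni/net.d}"
  if compgen -G "${dir}/*.conflist" >/dev/null || compgen -G "${dir}/*.conf" >/dev/null; then
    echo "ok"
  else
    echo "missing"  # kubelet keeps the node NotReady while this is the case
  fi
}
```

Running this on an affected node and getting "missing" while the cilium pods show Running would match the failure mode described in this issue.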
What happened?
I used kubespray v2.24 with
I did not set any other cilium-related variable.
The playbook finished successfully, but all nodes (3 control plane, 2 worker) stayed NotReady.
Relevant output of `kubectl describe node control-plane-1`: …
The same log message was displayed when querying the kubelet logs directly on the node.
The cilium pods were all running; output of `cilium status`: …
`cilium version` had the following output: …
What did you expect to happen?
The nodes becoming Ready and cilium being detected by kubelet.
How can we reproduce it (as minimally and precisely as possible)?
Running kubespray with … as variables, on Ubuntu 22.04 targets.
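The elided variables can be read off the inventory excerpt quoted later in this report; as YAML they amount to just:

```yaml
# The only cilium-related settings, per the inventory excerpt below
kube_network_plugin: cilium
cilium_version: v1.14.0
```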
OS
Nodes:
I run ansible in the quay.io/kubespray/kubespray:v2.24.0 container.
Version of Ansible
Version of Python
Python 3.10.12
Version of Kubespray (commit)
The container image quay.io/kubespray/kubespray:v2.24.0 was used.
Network plugin used
cilium
Full inventory with variables
This contains some sensitive information; if it is strictly required, I can upload it later with some redactions.
They do contain "cilium_version": "v1.14.0" and "kube_network_plugin": "cilium".
Command used to invoke ansible
ansible-playbook -i inventory/inventory.yaml cluster.yaml
Output of ansible run
https://gist.github.com/RubenMakandra/933719a5caa6cb1daa92b115dd6e37ef
Anything else we need to know
No response