

kubernetes_kubelet_extra_args is futile #77

Closed
edigaryev opened this issue Jun 18, 2020 · 6 comments

Comments

@edigaryev

When Kubernetes 1.16 is deployed on a system with Docker configured with native.cgroupdriver=systemd, Kubelet (which defaults to cgroupfs driver) fails to start:

```
kubelet[14617]: F0618 08:38:42.641673   14617 server.go:271] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
```
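For context, Docker typically ends up with the systemd cgroup driver when `/etc/docker/daemon.json` contains something like the following (an illustrative sketch, not necessarily the exact config on the affected system):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```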

This failure in turn results in the kubelet service being masked by systemd. And even if we set kubernetes_kubelet_extra_args beforehand:

```yaml
kubernetes_kubelet_extra_args: "--cgroup-driver=systemd"
```

...this has no effect, since kubelet-setup.yml does nothing to unmask the service again, so the Kubernetes deployment fails with the kubelet service still in the masked state.
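A minimal sketch of what kubelet-setup.yml could do to recover, assuming Ansible's built-in `systemd` module (the task names and placement are illustrative, not taken from the role):

```yaml
# Illustrative only: unmask kubelet before trying to (re)start it,
# otherwise systemd refuses to start a masked unit.
- name: Ensure kubelet is not masked
  systemd:
    name: kubelet
    masked: no

- name: Restart kubelet
  systemd:
    name: kubelet
    state: restarted
    daemon_reload: yes
```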

This seems to be related to #4.

@edigaryev
Author

I forgot to mention that this happens at least on Debian 10 with kubernetes_version set to 1.16.

I've pinned the masking issue down to the kubernetes-cni package, installed by this role:

```yaml
- name: kubernetes-cni
  state: present
```

Installing it removes kubelet and kubeadm:

```
# apt-get install kubernetes-cni
[...]
The following packages will be REMOVED:
  kubeadm kubelet
[...]
```

This is caused by recent changes to kubelet's debian/control file: kubernetes/release#1330

The root cause seems to be strange package constraints like `Provides: kubernetes-cni` combined with `Obsoletes: kubernetes-cni`, and this affects a lot of people: kubernetes/kubernetes#92242

@matthew-mcdermott

+1

@matthew-mcdermott

matthew-mcdermott commented Jun 18, 2020

@edigaryev It seems I was able to get past it with the following configuration:

```yaml
kubernetes_packages:
  - name: kubelet
    state: latest
  - name: kubectl
    state: latest
  - name: kubeadm
    state: latest
  - name: kubernetes-cni
    state: absent
```

```yaml
kubernetes_pod_network:
  cni: 'calico'
  cidr: '192.168.0.0/16'
```

@edigaryev
Author

So, in retrospect, this has nothing to do with kubernetes_kubelet_extra_args itself; it's caused by the kubernetes-cni package, which doesn't seem to be required anyway for most CNI implementations (kubernetes-sigs/image-builder#259).

I've elaborated a bit on the workaround by @matthew-mcdermott above and pushed changes in #79.

@edigaryev
Author

edigaryev commented Jun 24, 2020

Now that kubernetes/release#1375 has been merged, removing kubernetes-cni as in the workaround above actually results in kubelet being removed as well, so it seems we can simply keep everything as is and consider this issue resolved.

There are, however, some issues that still persist on CentOS when running a playbook similar to the master branch, and they both seem to be related to the GPG key.
