1.6.0 kubelet fails with error "misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" #43805
Comments
Seeing the same. |
Same.
I also see the referenced cgroupfs error. |
@civik it has been resolved; you need to add --cgroup-driver=systemd to the kubelet startup parameters in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf |
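For reference, the drop-in edit looks roughly like this (a sketch based on the 1.6-era kubeadm package layout; the exact Environment variable name and the rest of the file may differ by package version):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
# Make the kubelet's cgroup driver match what `docker info` reports.
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
```

After editing, reload and restart: systemctl daemon-reload && systemctl restart kubelet.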
@civik the kubeadm issue is solved too. You MUST make sure you run a non-alpha version of kubeadm. Here is what I have, and it started working properly: |
This looks like a bug in the kubelet package. It should default to the way docker is setup on the system. |
I'm having this issue too. I added |
I'm getting this issue as well, but it's not with kubeadm; mine is built the hard way via Vagrant on CentOS 7. |
for me, since I am on centos, running "yum install kubeadm-1.6.0-0.x86_64" does the trick. |
I opened #43819 |
I had the same problem when installing 1.6.0 on CentOS. I added the workaround, but the node never gets ready. When I describe the node, I get these events: |
Same here with Red Hat Enterprise Linux Server release 7.3 (Maipo):

Mar 30 14:58:17 master01a kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"

But it looks like that resolved my issue. |
You can try: yum remove docker docker-common && yum install docker-engine-1.12.6 |
Does anyone have an idea what the real difference between the cgroupfs and systemd drivers is? Which one should I really use? Currently it looks like CentOS/RHEL ships a docker that has systemd specified by default. Why does RHEL/CentOS not use docker's default cgroup driver (which is cgroupfs)? The weird thing here is that we have been using the docker cgroup driver systemd for a long time, and we did not have anything specified in kubelet. After upgrading to 1.6.0 we need to specify startup options, so was the automatic support for reading the cgroup driver from docker removed, or is this a bug? |
Related discussions coreos/bugs#1435 |
The fix for this is in the release repo, so let's fold this into kubernetes/release#306 |
Not sure I got it right, but the rpms from http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 still need the --cgroup-driver=systemd fix in 10-kubeadm.conf in order to get the kubelet service started. |
Agreed. This is a duplicate of kubernetes/release#306 but that issue is in the repository where the fix needs to be made so let's consolidate discussion there. |
@mikedanese - thanks, got it! |
Here is a
Last one solved the issue for me. |
@Cisneiros - you can try "yum update systemd" |
kubelet.service fails to start out of the box: kubelet[1650]: Error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" due to kubernetes/kubernetes#43805. Add a hack until the upstream fix (kubernetes/release#313) trickles down into Fedora-26.
I'm getting exactly the opposite error: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs". Fixed: added "--exec-opt native.cgroupdriver=systemd" to the docker options. |
I just had this issue, maybe this helps someone...: If you use the docker packages supplied via EPEL (package "docker", version 1.12.6), it works OOTB with the "systemd" driver. |
kubelet's cgroup driver was not the same as docker's cgroup driver, so I updated systemd -> cgroupfs.
After restarting kubelet, everything is OK. |
I am facing the same issue on Ubuntu 16.04.2; please let me know if there are any workarounds.

cat /etc/issue
Ubuntu 16.04.2 LTS \n \l

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

docker version
Client:
Server:

kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. |
I am on Arch Linux. Got it to work by overriding ExecStart of the docker service:
Based on this stack overflow answer: |
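A sketch of such an override (the drop-in filename and the dockerd flags here are assumptions; adjust to your unit's original ExecStart line, and note ExecStart must be cleared before it can be redefined):

```ini
# /etc/systemd/system/docker.service.d/override.conf (hypothetical drop-in)
[Service]
# Clear the inherited ExecStart, then redefine it with an explicit cgroup driver.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --exec-opt native.cgroupdriver=cgroupfs
```

Apply with systemctl daemon-reload && systemctl restart docker, then confirm with docker info.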
Just to add to @heartarea's response:

Verify which cgroup driver dockerd is using.
Output: Cgroup Driver: cgroupfs

Verify kubeadm's cgroup settings.

Change them to match Docker's.

Restart it.

Important NOTE: you'll need to change the cgroup driver on your nodes as well.

Versions: kubeadm v1.7.3 |
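The steps above can be sketched as commands. The sed edit is demonstrated on a temp copy of the drop-in so the snippet runs anywhere; on a real master the file is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and you would follow with systemctl daemon-reload && systemctl restart kubelet:

```shell
# 1. Check which cgroup driver dockerd reports (on a real node):
#      docker info 2>/dev/null | grep -i 'cgroup driver'
# 2. Edit the kubelet drop-in so --cgroup-driver matches.
#    Demonstrated here on a temp copy of the relevant line:
conf=$(mktemp)
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"' > "$conf"

# Switch the kubelet driver to match a Docker that reports "cgroupfs":
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$conf"
grep -o 'cgroup-driver=[a-z]*' "$conf"   # prints: cgroup-driver=cgroupfs
```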
Confirmed @jmarcos-cano's steps at least cleared up kubeadm hanging on "waiting for control plane to become ready". VirtualBox VM with NAT and bridged network interfaces. ERROR in /var/log/messages: Followed the above steps, and all control plane components are healthy. |
good job! |
I changed it to match Docker's, but it does not work. My system is CentOS 7.

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Updated KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs

grep KUBELET_CGROUP_ARGS 10-kubeadm.conf
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS

docker info | grep -i cgroup
systemctl daemon-reload
./openshift start
./openshift version
docker version
Server:
oc version
kubeadm version |
skipping pod synchronization - [Failed to start ContainerManager systemd version does not support ability to start a slice as transient unit] |
@lxy16611 could you please explain more? When you say the node is using it, do you mean the OS of the node? Otherwise, I suppose the master is a node itself. |
I came to a solution by completely removing the version of Docker from the CentOS base repository and installing Docker from the official repository, as explained here: https://docs.docker.com/install/linux/docker-ce/centos/#install-using-the-repository In fact, the CentOS-based Docker comes with "systemd" as its cgroup driver (--cgroup-driver=systemd). So, instead of switching the cgroup driver between "cgroupfs" and "systemd", consider installing Docker from the official repository by following the link above. |
I got this error because I hadn't done the post-installation steps for installing docker (so you don't have to run docker as root). After doing that, I got a new error:
Have more work to do I guess |
Alter Docker's config file, /etc/docker/daemon.json: |
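The daemon.json change would look something like this (a sketch; the "exec-opts" key is Docker's documented daemon option for the cgroup driver, but whether you want systemd or cgroupfs depends on which side of the mismatch you are fixing):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Then restart Docker (systemctl restart docker) and verify with docker info | grep -i cgroup.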
With minikube 1.22.0 I had to change |
Same in kubeadm v1.22.1. This is caused by the original docker cgroup driver (shown by docker info). Change the file:

to
|
For cluster installation with kubeadm, I referred to https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#configuring-the-kubelet-cgroup-driver then run |
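The pattern from that page configures the driver declaratively in the kubeadm config instead of via kubelet flags (version strings here are illustrative; on modern kubeadm, systemd is the recommended driver on systemd-based hosts):

```yaml
# kubeadm-config.yaml (excerpt), passed to `kubeadm init --config kubeadm-config.yaml`
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.22.1
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
```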
kubernetes 1.6.0, installation AIO with kubeadm
centos 7.3
When kubeadm init runs, the following error gets reported and kubelet fails to start:
kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"