Failed to find subsystem mount for required subsystem: pids #16
Comments
I hit the same issue on k8s v1.14.1.
Based on the error message, it looks to me like a side effect of PID limiting, which includes the Pod Pids Limit and Node Pids Limit features introduced in v1.14.0 and requires the pids cgroup. The pids cgroup is not mounted on Raspbian:
I tried adding
For now, I see two options:
@alexellis, any inputs on that?
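As a quick check (a minimal sketch, not from the thread itself): the kernel advertises its available cgroup controllers in /proc/cgroups, so you can verify whether pids is present before kubelet complains. The helper takes the file contents as an argument purely so the logic is easy to test.

```shell
#!/bin/sh
# Sketch: check whether the kernel exposes the pids cgroup controller.
# /proc/cgroups lists one controller per line:
#   subsys_name  hierarchy  num_cgroups  enabled
check_pids_cgroup() {
    # $1: contents of /proc/cgroups (passed in for testability)
    printf '%s\n' "$1" | awk '$1 == "pids" && $4 == 1 { found = 1 } END { exit !found }'
}

if check_pids_cgroup "$(cat /proc/cgroups 2>/dev/null)"; then
    echo "pids cgroup: available"
else
    echo "pids cgroup: MISSING (kubelet >= 1.14 will fail to start)"
fi
```

On a stock Raspbian stretch kernel this reports the pids controller as missing, which matches the error in this issue.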
Recompiling the Raspbian kernel with CONFIG_CGROUP_PIDS enabled fixed this issue for me (I was running k8s v1.14.1).
I hit this problem as well; I can confirm downgrading to 1.13.5 works fine.
I have the same error. Unfortunately, downgrading does not do the trick for me :(
The PIDs cgroup will be available in the next rpi release: raspberrypi/linux#2968 (comment).
Thanks hgontijo for pushing for it :)
Is there a good guide/tutorial on how to recompile the Raspbian kernel with CONFIG_CGROUP_PIDS enabled?
Recompiling the Raspbian kernel also worked for me. Here is the guide I followed: Kernel building. Add CONFIG_CGROUP_PIDS=y to arch/arm/configs/bcmrpi_defconfig (Raspbian source code).
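The defconfig edit described above can be scripted. This is a minimal sketch (the checkout path is illustrative, taken from the comment above) that appends CONFIG_CGROUP_PIDS=y only when it isn't already set, so it is safe to run more than once:

```shell
#!/bin/sh
# Sketch: ensure CONFIG_CGROUP_PIDS=y is present in a kernel defconfig
# before building. Idempotent: appends the option only if it is missing.
enable_pids_option() {
    # $1: path to a defconfig file
    grep -q '^CONFIG_CGROUP_PIDS=y$' "$1" || echo 'CONFIG_CGROUP_PIDS=y' >> "$1"
}

# Illustrative usage inside a raspberrypi/linux checkout:
# enable_pids_option arch/arm/configs/bcmrpi_defconfig
```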
Should an rpi-update fix this yet? Not sure how to tell when the firmware for rpi-4.19.y will be released.
Me too, on k8s v1.14.2.
@Mike-Dunton I was able to rpi-update to 4.19.46-v7+ today, and can confirm the PIDS fix is in place. kubeadm 1.14.3 installs and inits fine.
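To tell whether a node already has firmware with the fix, you can compare the running kernel against the 4.19 series (a rough sketch; the 4.19.46-v7+ figure comes from the comment above and is not independently verified here):

```shell
#!/bin/sh
# Sketch: compare `uname -r` against a required "major.minor" version.
kernel_at_least() {
    # $1: running version (e.g. "4.19.46-v7+"), $2: required "major.minor"
    have_major="${1%%.*}"; rest="${1#*.}"; have_minor="${rest%%.*}"
    want_major="${2%%.*}"; want_minor="${2#*.}"
    [ "$have_major" -gt "$want_major" ] || {
        [ "$have_major" -eq "$want_major" ] && [ "$have_minor" -ge "$want_minor" ]
    }
}

if kernel_at_least "$(uname -r)" "4.19"; then
    echo "kernel is 4.19+ (PIDs cgroup fix should be present)"
else
    echo "kernel is older than 4.19; run rpi-update or rebuild"
fi
```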
I was able to get my cluster upgraded to This does have the issue: I don't use any burstable pods on my cluster, but until |
I had the same issue. Did a
Note: After upgrading in place I had to disable swap again.
I've managed to upgrade my Raspbian to buster, but it wasn't error free; the following issues were hit:
- iptables runs in nf_tables mode - kube-proxy only works in legacy mode: kubernetes/kubernetes#71305 (comment)
- swap would be enabled after each boot:
- I noticed a substantial number of errors being reported from docker, saying that cgroupfs/net_prio was missing (even though it existed and was mounted) - upgrading docker-ce to
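For reference, the iptables and swap workarounds mentioned above boil down to a handful of commands. This is a sketch assembled from the comment and the linked kubernetes issue, not verified on every Raspbian release; it defaults to a dry run that only prints what it would do.

```shell
#!/bin/sh
# Sketch of the buster workarounds from the comment above. Defaults to a
# dry run (prints commands); set DRY_RUN=0 and run as root to apply.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# kube-proxy only works with iptables in legacy mode on buster:
run update-alternatives --set iptables /usr/sbin/iptables-legacy
run update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

# swap re-enables itself after each boot; turn it off persistently:
run dphys-swapfile swapoff
run systemctl disable dphys-swapfile
```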
@davidcollom Thank you for those hints. Didn't notice that iptables wasn't working correctly but fixed it now with the command from your link. |
Given the issues we are finding with What do you think? |
I'm not sure how k3s would solve the issue at hand? Most of these are issues with either kube-proxy (iptables) or low-level cgroups being available from the kernel, which were related to Raspbian releases. I've been running my kubeadm cluster for a few months now, and the only issues I find are during upgrades (OS and Kubernetes). The cluster in itself is stable for day-to-day operations and requires little to no picking up. That being said, I have wondered about moving over to k3s, as my etcd instance no longer fits alongside the control plane and I have a dedicated RPi for etcd. My cluster consists of 3x amd64, 2x pine64 (arm64), 2x RPi "masters" and 7x RPi workers.
My current recommendation is to use k3s: it uses far fewer resources and works on ARM very well, with no timing issues. https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/ Please try it and let us know if it resolves those issues.
Thanks for the comments. I'm closing / archiving this issue now as it seems to have gone off topic. My recommendation is that all RPi users try k3s which is GA, compliant and better tested for RPi than kubeadm. |
PRs are still welcome for the kubeadm guide, just make sure they are tested and have specifics such as versions of Raspbian used and any other steps you ran. |
Expected Behaviour
kubernetes master node starts and kubectl get nodes shows Ready status.
Current Behaviour
kubernetes master node starts, and kubectl get nodes shows NotReady status.
kubectl describe nodes shows this error in the event log:
Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: Failed to find subsystem mount for required subsystem: pids
Possible Solution
Not sure - disable whatever requires this cgroup? Is it something new in 1.14? Or enable that cgroup in Raspbian Lite somewhere? (I'm not a cgroup expert, so I don't know where to even start.)
Steps to Reproduce (for bugs)
(follow the guide in this repo, I get these results at the "Check everything worked:" step of the guide)
Context
Can't schedule pods / nodes not ready.
Your Environment
Docker version (docker version, e.g. Docker 17.0.05): Docker version 18.09.0, build 4d60db4
What version of Kubernetes are you using? (kubectl version):
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/arm"}
Operating System and version (e.g. Linux, Windows, MacOS):
2018-11-13-raspbian-stretch-lite.img
4.14.79-v7+
What ARM or Raspberry Pi board are you using?
Raspberry Pi 3 B+